Compare commits
430 Commits
097791a380
548aa65973
c8bf012b13
febfa7eef7
15fae1ba47
a04760a08b
78fee8e05b
d30a970ea8
8c6a3f5142
e1914963a6
c0a258afef
cb61e92a13
963ccf68eb
c98bef59b8
9948983b15
dd3f5584f6
8615cfc8d9
f6ef492a1d
67b4b38a12
39187d64d5
ad20475f11
b76d853800
914fb89687
b5379ad6b7
3b04fa8c02
23e2f299b8
e6d8d83c96
e689e0da09
9938548dda
f0df91d31f
dbee4f9d6e
c760abf0ca
ac30a7e5ea
8146f37681
7e3e61b76b
eb3e413c6a
339e501042
1e8bd45c63
519d087a88
d87927ed85
852cf2c3f0
8124f74317
f9794f526a
e1f1d3de1b
f7767ddc57
833399b1d0
e2759dae6f
44f2b978ac
ffa9824cd2
944fcb2da7
0e382353c7
faa8073f56
902cccd218
17f8382daa
6538e4ba72
a9cee48feb
396a90c03b
b01d2c2d18
2a5e32fc9f
8e979dce3b
cde061aa9b
929a1f9dee
a988d3398d
b43ef9a11e
2c9296cc4b
f2c474e636
fab23a27aa
f8420c16c8
132eb89bc8
a43faae14a
7ffd3512be
39a724e109
07de677d04
f625f8d5c3
392cd6ddbf
63561e4d05
33f374def5
6b56bb2b72
0d94b6e4cf
46e1bb2cc3
a0879b8167
15db0882ae
c1d6d93374
a601ecc468
2cd1af070e
d1e712de90
27f482bd9b
1bbefa8132
cdc6447b88
b8069c547b
2ae85e8dcb
975239ecc9
34c07f3e5c
7cbb97100e
a5041e1cf3
fa04ba6d29
e45a9c04f3
b515d93963
c81ddd6e96
115edc0551
9befa479b9
985634be7f
f6f0db84be
32eaf99217
3a44ec93c9
ffaa3d2113
779b7551fa
036ea2be75
5fc22ebca9
7b3b230f47
2a811e282b
9681de43de
5a8f33df0f
b963844fec
a156587c6e
15a73c9e36
6e1524ef46
7a0f6d99c6
a929ab5644
1ce0fbabc1
398d05c997
7a878def1a
8364519d61
77392d6ad8
d6b173977b
68c1dae851
309e228591
e580550561
7781fbef0e
02f7e12546
7680931f88
17ce9ec445
51d0c51ee2
3904838518
2cd50c6ff8
13bf7b6af0
33c53e101a
cd461a9333
dab06c96e4
9c7cfd7a53
b15eac47a6
1b3398e54e
433a5fa6c7
9a0883e8f2
73a8bda8bd
d1c3f58399
094b61b66c
e38d6aed78
dec0d4c11d
3f16363ff1
e38f29772d
81adad7ae4
498fa5afe7
5f4111249a
26a6c23156
ec91e90f08
28ed96a319
88101a2274
ab67f9c98c
7cc3351ff8
e454db847d
e3e006cbcc
4af6f26acc
e1cc7af587
58ed0277e3
ccf9f3a32d
d5f5cec2f9
3f5e636bc3
54e6163356
6764850dca
15701bbe26
6c6cb23be1
9abb26714b
86d06696df
702c2e65d3
f82928c33e
ec6480dd4c
a22e7cd960
3f1666ec24
55babc8300
cb6307b799
92fd5b54ed
5a3f8d714b
e1ea3d7515
2ea5513286
761abc7611
4b17f8fbcd
8c5dd01964
bb1bd7d4db
b1ed0589b2
1deb51287b
94a23e5b05
a7119b5bda
674cdd0df0
d8a5c4ffd5
3a30bd8fed
5a071e502c
4fa27a3ca9
ccf3740b5b
4250b68b0f
9c1c474dc2
69dcfa5277
a7e4b23350
b8ec64414c
86a9be5c33
145b166720
b06ce86784
68d6e221a1
76eaa3d3c1
15f55be936
715dd93150
ab92fece63
62998adab2
c83497b685
38aa0d01d5
c9488eb1f9
3a36dab7ca
c4bf0b3a47
6b53539738
b08acf2457
7ef16f1240
5f9bb1aaa6
3ca724d928
fa0e458b3e
ca36721b81
604acd1870
867257d59a
06c4189bf9
5fa7579794
aa3998ee3a
4f35fda4b2
c6506097fd
f30875aa58
5f50e6f244
5846009648
0706d83133
d57db31395
986c4b96b0
939ac11774
13ac2a6641
207e74ecd4
0e56cd1e9a
f32cd21452
0e002df132
066dde1110
6bf1747822
97cc2abc7c
d4d4a05695
9da5dda258
400b8cd097
f3525fe363
e847b7f62c
086676cdc5
321671b879
91350b05fb
91c0faf004
2e0fc456be
1101ebed73
71a59a08c7
3d28249c19
2d4000410d
9ed6dca696
a0a6411449
c546ba83da
576a4288c8
75222752d2
2769cecfef
a63cc05a7f
dab0603847
7b27a9ad49
7ad77e1a6b
0898e6fa65
503df993c0
f3c81c1662
f57e05cbed
23d264196f
209aeaf5be
b2410e2dab
22daa08f60
cc043c43d1
27f6470f5c
6de8c36fba
c3095ee6e0
777bc5b0c0
704f6518ec
53c9c407ed
3ef709c84e
89270bf0fa
6172382d1b
fca7f3a9a0
994ef67d21
97cb124f24
d3660e5474
502dcb72ed
41b92af023
641b6cb856
ff9b19bbf5
9d77c781bc
1e55e457c3
bfe44afdf8
8c876810bc
61a03cb24b
3e972418a9
3375a5b613
b92a30910c
e71c029cf1
5c63250893
cc7a937188
cccabbf89f
6f885bf313
b243e93f94
8759703a8b
20ba35f9a2
9ccdcccf17
743fa08e8f
96cceeb539
51b0fd8453
95ef30ba72
2d7e6a1283
b7dcd5b348
c0dd5f5713
c1b93f5531
df3462e205
d3e4d6e198
81a0941b1d
180f0a5041
91fa3bb642
0a275ab34f
eda558c0d5
1e7289dfe0
d48e95b8c3
edc1b83c5f
0614c55fc3
fe5565dbcf
368d8363da
1e8dd33559
30c7eab049
dbd99b37d1
41f1d0a5a6
a38d21cf91
d51d539067
e8524abea5
6ade92ed83
58817f8e8b
5ef3a182e3
212a940e64
262f956ebb
8191bbe22c
25832b90d5
a0a6066726
ab5a8ec5b6
8f6229c4c7
9de63db1d6
e2b4afbca0
59f3c3b647
2ccd63fdfa
7ae1d69b07
173f8f47b2
6b525310c6
0293462aee
76c2977095
f546f72bbd
15c096887f
d2b883a073
f94d4cc9de
39bc516674
7f62a62f28
702d728cba
fffe1e01dc
8a84fdbe38
b2c5f30d16
90f9c7ba23
355f86ccc8
96afafb1ed
b4fd827436
f29d1da373
5092cfee63
f28692e0ce
4e5a8338b7
f898dec142
119677db04
d2168251d3
107029b180
9cb380eef5
b5c05972dc
1c5145c47d
1478f30841
045f94086d
8512678932
342ed8d932
bd3655b122
19719de0c5
dba0d38ff1
931a692eb9
ff617ac1fd
aa2f9576a8
049a4ea974
ae4df55ae4
37dc053972
86688525a1
2fcba12f67
1b8111495a
9a36732ebd
7be7109e65
00d0dd396e
14c5434335
18e294ebb5
451d299c87
ffc75805f0
17947a7e8d
9169d735fe
be7e7055e2
b4015b98e6
8d2511bbed
246bfdb85c
29b2011779
ab53d4f98c
78edfff1f5
fa6ec17cfb
c7ed614cbc
d970383384
fddb93ad47
c099863c4f
e9326b0a2b
159f44a0c5
be119b4a84
408d6fc26b
1dedf7d37f
ec9abd0b77
e835daf103
5  .codespellignore  (new file)
@@ -0,0 +1,5 @@
aks
ec2
eks
gce
gcp
1  .github/CODEOWNERS  (new file)
@@ -0,0 +1 @@
* @longhorn/dev
48  .github/ISSUE_TEMPLATE/bug.md  (new file)
@@ -0,0 +1,48 @@
---
name: Bug report
about: Create a bug report
title: "[BUG]"
labels: ["kind/bug", "require/qa-review-coverage", "require/backport"]
assignees: ''

---

## Describe the bug (🐛 if you encounter this issue)

<!--A clear and concise description of what the bug is.-->

## To Reproduce

<!--Provide the steps to reproduce the behavior.-->

## Expected behavior

<!--A clear and concise description of what you expected to happen.-->

## Support bundle for troubleshooting

<!--Provide a support bundle when the issue happens. You can generate a support bundle using the link at the footer of the Longhorn UI. Check [here](https://longhorn.io/docs/latest/advanced-resources/support-bundle/).-->

## Environment

<!-- Suggest checking the doc of the best practices of using Longhorn. [here](https://longhorn.io/docs/1.5.1/best-practices)-->
- Longhorn version:
- Installation method (e.g. Rancher Catalog App/Helm/Kubectl):
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version:
- Number of management node in the cluster:
- Number of worker node in the cluster:
- Node config
  - OS type and version:
  - Kernel version:
  - CPU per node:
  - Memory per node:
  - Disk type (e.g. SSD/NVMe/HDD):
  - Network bandwidth between the nodes:
- Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal):
- Number of Longhorn volumes in the cluster:
- Impacted Longhorn resources:
  - Volume names:

## Additional context

<!--Add any other context about the problem here.-->
49  .github/ISSUE_TEMPLATE/bug_report.md  (deleted)
@@ -1,49 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: "[BUG]"
labels: kind/bug
assignees: ''

---

## Describe the bug

A clear and concise description of what the bug is.

## To Reproduce

Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Perform '....'
4. See error

## Expected behavior

A clear and concise description of what you expected to happen.

## Log or Support bundle

If applicable, add the Longhorn managers' log or support bundle when the issue happens.
You can generate a Support Bundle using the link at the footer of the Longhorn UI.

## Environment

- Longhorn version:
- Installation method (e.g. Rancher Catalog App/Helm/Kubectl):
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version:
- Number of management node in the cluster:
- Number of worker node in the cluster:
- Node config
  - OS type and version:
  - CPU per node:
  - Memory per node:
  - Disk type(e.g. SSD/NVMe):
  - Network bandwidth between the nodes:
- Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal):
- Number of Longhorn volumes in the cluster:

## Additional context

Add any other context about the problem here.
16  .github/ISSUE_TEMPLATE/doc.md  (new file)
@@ -0,0 +1,16 @@
---
name: Document
about: Create or update document
title: "[DOC] "
labels: kind/doc
assignees: ''

---

## What's the document you plan to update? Why? Please describe

<!--A clear and concise description of what the document is.-->

## Additional context

<!--Add any other context or screenshots about the document request here.-->
24  .github/ISSUE_TEMPLATE/feature.md  (new file)
@@ -0,0 +1,24 @@
---
name: Feature request
about: Suggest an idea/feature
title: "[FEATURE] "
labels: ["kind/enhancement", "require/lep", "require/doc", "require/auto-e2e-test"]
assignees: ''

---

## Is your feature request related to a problem? Please describe (👍 if you like this request)

<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]-->

## Describe the solution you'd like

<!--A clear and concise description of what you want to happen-->

## Describe alternatives you've considered

<!--A clear and concise description of any alternative solutions or features you've considered.-->

## Additional context

<!--Add any other context or screenshots about the feature request here.-->
24  .github/ISSUE_TEMPLATE/feature_request.md  (deleted)
@@ -1,24 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: "[FEATURE] "
labels: kind/enhancement
assignees: ''

---

## Is your feature request related to a problem? Please describe

A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

## Describe the solution you'd like

A clear and concise description of what you want to happen

## Describe alternatives you've considered

A clear and concise description of any alternative solutions or features you've considered.

## Additional context

Add any other context or screenshots about the feature request here.
24  .github/ISSUE_TEMPLATE/improvement.md  (new file)
@@ -0,0 +1,24 @@
---
name: Improvement request
about: Suggest an improvement of an existing feature
title: "[IMPROVEMENT] "
labels: ["kind/improvement", "require/doc", "require/auto-e2e-test", "require/backport"]
assignees: ''

---

## Is your improvement request related to a feature? Please describe (👍 if you like this request)

<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]-->

## Describe the solution you'd like

<!--A clear and concise description of what you want to happen.-->

## Describe alternatives you've considered

<!--A clear and concise description of any alternative solutions or features you've considered.-->

## Additional context

<!--Add any other context or screenshots about the feature request here.-->
24  .github/ISSUE_TEMPLATE/improvement_request.md  (deleted)
@@ -1,24 +0,0 @@
---
name: Improvement request
about: Suggest an improvement of an existing feature for this project
title: "[IMPROVEMENT] "
labels: kind/improvement
assignees: ''

---

## Is your improvement request related to a feature? Please describe

A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

## Describe the solution you'd like

A clear and concise description of what you want to happen.

## Describe alternatives you've considered

A clear and concise description of any alternative solutions or features you've considered.

## Additional context

Add any other context or screenshots about the feature request here.
24  .github/ISSUE_TEMPLATE/infra.md  (new file)
@@ -0,0 +1,24 @@
---
name: Infra
about: Create a test/dev infra task
title: "[INFRA] "
labels: kind/infra
assignees: ''

---

## What's the test to develop? Please describe

<!--A clear and concise description of what test/dev infra you want to develop.-->

## Describe the items of the test development (DoD, definition of done) you'd like

<!--
Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists

- [ ] `item 1`
-->

## Additional context

<!--Add any other context or screenshots about the test infra request here.-->
8  .github/ISSUE_TEMPLATE/question.md  (8 changes)
@@ -1,13 +1,14 @@
 ---
 name: Question
-about: Question on Longhorn
+about: Have a question
 title: "[QUESTION] "
 labels: kind/question
 assignees: ''

 ---
 ## Question
-> Suggest to use https://github.com/longhorn/longhorn/discussions to ask questions.
+
+<!--Suggest to use https://github.com/longhorn/longhorn/discussions to ask questions.-->

 ## Environment

@@ -15,6 +16,7 @@ assignees: ''
 - Kubernetes version:
 - Node config
   - OS type and version
+  - Kernel version
   - CPU per node:
   - Memory per node:
   - Disk type
@@ -23,4 +25,4 @@ assignees: ''

 ## Additional context

-Add any other context about the problem here.
+<!--Add any other context about the problem here.-->
24  .github/ISSUE_TEMPLATE/refactor.md  (new file)
@@ -0,0 +1,24 @@
---
name: Refactor request
about: Suggest a refactoring request for an existing implementation
title: "[REFACTOR] "
labels: kind/refactoring
assignees: ''

---

## Is your improvement request related to a feature? Please describe

<!--A clear and concise description of what the problem is.-->

## Describe the solution you'd like

<!--A clear and concise description of what you want to happen.-->

## Describe alternatives you've considered

<!--A clear and concise description of any alternative solutions or features you've considered.-->

## Additional context

<!--Add any other context or screenshots about the refactoring request here.-->
35  .github/ISSUE_TEMPLATE/release.md  (new file)
@@ -0,0 +1,35 @@
---
name: Release task
about: Create a release task
title: "[RELEASE]"
labels: release/task
assignees: ''

---

**What's the task? Please describe.**
Action items for releasing v<x.y.z>

**Describe the sub-tasks.**
- Pre-Release
  - [ ] Regression test plan (manual) - @khushboo-rancher
  - [ ] Run e2e regression for pre-GA milestones (`install`, `upgrade`) - @yangchiu
  - [ ] Run security testing of container images for pre-GA milestones - @yangchiu
  - [ ] Verify longhorn chart PR to ensure all artifacts are ready for GA (`install`, `upgrade`) @chriscchien
  - [ ] Run core testing (install, upgrade) for the GA build from the previous patch and the last patch of the previous feature release (1.4.2). - @yangchiu
- Release
  - [ ] Release longhorn/chart from the release branch to publish to ArtifactHub
  - [ ] Release note
    - [ ] Deprecation note
    - [ ] Upgrade notes including highlighted notes, deprecation, compatible changes, and others impacting the current users
- Post-Release
  - [ ] Create a new release branch of manager/ui/tests/engine/longhorn instance-manager/share-manager/backing-image-manager when creating the RC1
  - [ ] Update https://github.com/longhorn/longhorn/blob/master/deploy/upgrade_responder_server/chart-values.yaml @PhanLe1010
  - [ ] Add another request for the rancher charts for the next patch release (`1.5.1`) @rebeccazzzz
    - Rancher charts: verify the chart is able to install & upgrade - @khushboo-rancher
  - [ ] rancher/image-mirrors update @weizhe0422 (@PhanLe1010 )
    - https://github.com/rancher/image-mirror/pull/412
  - [ ] rancher/charts 2.7 branches for rancher marketplace @weizhe0422 (@PhanLe1010)
    - `dev-2.7`: https://github.com/rancher/charts/pull/2766

cc @longhorn/qa @longhorn/dev
13  .github/ISSUE_TEMPLATE/task.md  (13 changes)
@@ -1,6 +1,6 @@
 ---
 name: Task
-about: Task on Longhorn
+about: Create a general task
 title: "[TASK] "
 labels: kind/task
 assignees: ''
@@ -9,13 +9,16 @@ assignees: ''

 ## What's the task? Please describe

-A clear and concise description of what the task is.
+<!--A clear and concise description of what the task is.-->

-## Describe the items of the task (DoD, definition of done) you'd like
-> Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists
+## Describe the sub-tasks
+
+<!--
+Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists
+
+- [ ] `item 1`
+-->

 ## Additional context

-Add any other context or screenshots about the task request here.
+<!--Add any other context or screenshots about the task request here.-->
13  .github/ISSUE_TEMPLATE/test.md  (13 changes)
@@ -1,6 +1,6 @@
 ---
 name: Test
-about: Test task on Longhorn
+about: Create or update test
 title: "[TEST] "
 labels: kind/test
 assignees: ''
@@ -9,13 +9,16 @@ assignees: ''

 ## What's the test to develop? Please describe

-A clear and concise description of what the test you want to develop.
+<!--A clear and concise description of what test you want to develop.-->

-## Describe the items of the test development (DoD, definition of done) you'd like
-> Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists
+## Describe the tasks for the test
+
+<!--
+Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists
+
+- [ ] `item 1`
+-->

 ## Additional context

-Add any other context or screenshots about the test request here.
+<!--Add any other context or screenshots about the test request here.-->
34  .github/mergify.yml  (new file)
@@ -0,0 +1,34 @@
pull_request_rules:
  - name: automatic merge after review
    conditions:
      - check-success=continuous-integration/drone/pr
      - check-success=DCO
      - check-success=CodeFactor
      - check-success=codespell
      - "#approved-reviews-by>=1"
      - approved-reviews-by=@longhorn/maintainer
      - label=ready-to-merge
    actions:
      merge:
        method: rebase

  - name: ask to resolve conflict
    conditions:
      - conflict
    actions:
      comment:
        message: This pull request is now in conflicts. Could you fix it @{{author}}? 🙏

  # Comment on the PR to trigger backport. ex: @Mergifyio copy stable/3.1 stable/4.0
  - name: backport patches to stable branch
    conditions:
      - base=master
    actions:
      backport:
        title: "[BACKPORT][{{ destination_branch }}] {{ title }}"
        body: |
          This is an automatic backport of pull request #{{number}}.

          {{cherry_pick_error}}
        assignees:
          - "{{ author }}"
40  .github/workflows/add-to-projects.yml  (new file)
@@ -0,0 +1,40 @@
name: Add-To-Projects
on:
  issues:
    types: [ opened, labeled ]
jobs:
  community:
    runs-on: ubuntu-latest
    steps:
    - name: Is Longhorn Member
      uses: tspascoal/get-user-teams-membership@v1.0.4
      id: is-longhorn-member
      with:
        username: ${{ github.event.issue.user.login }}
        organization: longhorn
        GITHUB_TOKEN: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
    - name: Add To Community Project
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] == null
      uses: actions/add-to-project@v0.3.0
      with:
        project-url: https://github.com/orgs/longhorn/projects/5
        github-token: ${{ secrets.CUSTOM_GITHUB_TOKEN }}

  qa:
    runs-on: ubuntu-latest
    steps:
    - name: Is Longhorn Member
      uses: tspascoal/get-user-teams-membership@v1.0.4
      id: is-longhorn-member
      with:
        username: ${{ github.event.issue.user.login }}
        organization: longhorn
        GITHUB_TOKEN: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
    - name: Add To QA & DevOps Project
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null
      uses: actions/add-to-project@v0.3.0
      with:
        project-url: https://github.com/orgs/longhorn/projects/4
        github-token: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
        labeled: kind/test, area/infra
        label-operator: OR
50  .github/workflows/close-issue.yml  (new file)
@@ -0,0 +1,50 @@
name: Close-Issue
on:
  issues:
    types: [ unlabeled ]
jobs:
  backport:
    runs-on: ubuntu-latest
    if: contains(github.event.label.name, 'backport/')
    steps:
    - name: Get Backport Version
      uses: xom9ikk/split@v1
      id: split
      with:
        string: ${{ github.event.label.name }}
        separator: /
    - name: Check if Backport Issue Exists
      uses: actions-cool/issues-helper@v3
      id: if-backport-issue-exists
      with:
        actions: 'find-issues'
        token: ${{ github.token }}
        title-includes: |
          [BACKPORT][v${{ steps.split.outputs._1 }}]${{ github.event.issue.title }}
    - name: Close Backport Issue
      if: fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] != null
      uses: actions-cool/issues-helper@v3
      with:
        actions: 'close-issue'
        token: ${{ github.token }}
        issue-number: ${{ fromJSON(steps.if-backport-issue-exists.outputs.issues)[0].number }}

  automation:
    runs-on: ubuntu-latest
    if: contains(github.event.label.name, 'require/automation-e2e')
    steps:
    - name: Check if Automation Issue Exists
      uses: actions-cool/issues-helper@v3
      id: if-automation-issue-exists
      with:
        actions: 'find-issues'
        token: ${{ github.token }}
        title-includes: |
          [TEST]${{ github.event.issue.title }}
    - name: Close Automation Test Issue
      if: fromJSON(steps.if-automation-issue-exists.outputs.issues)[0] != null
      uses: actions-cool/issues-helper@v3
      with:
        actions: 'close-issue'
        token: ${{ github.token }}
        issue-number: ${{ fromJSON(steps.if-automation-issue-exists.outputs.issues)[0].number }}
23  .github/workflows/codespell.yml  (new file)
@@ -0,0 +1,23 @@
name: Codespell

on:
  push:
  pull_request:
    branches:
    - master
    - "v*.*.*"

jobs:
  codespell:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout
      uses: actions/checkout@v3
      with:
        fetch-depth: 1
    - name: Check code spell
      uses: codespell-project/actions-codespell@v1
      with:
        check_filenames: true
        ignore_words_file: .codespellignore
        skip: "*/**.yaml,*/**.yml,*/**.tpl,./deploy,./dev,./scripts,./uninstall"
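The same check can be reproduced locally before pushing. A sketch assuming codespell is installed from PyPI; the flags mirror the action's `check_filenames`, `ignore_words_file`, and `skip` inputs:

pip install codespell
codespell --check-filenames \
  --ignore-words .codespellignore \
  --skip "*/**.yaml,*/**.yml,*/**.tpl,./deploy,./dev,./scripts,./uninstall"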
114  .github/workflows/create-issue.yml  (new file)
@@ -0,0 +1,114 @@
name: Create-Issue
on:
  issues:
    types: [ labeled ]
jobs:
  backport:
    runs-on: ubuntu-latest
    if: contains(github.event.label.name, 'backport/')
    steps:
    - name: Is Longhorn Member
      uses: tspascoal/get-user-teams-membership@v1.0.4
      id: is-longhorn-member
      with:
        username: ${{ github.actor }}
        organization: longhorn
        GITHUB_TOKEN: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
    - name: Get Backport Version
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null
      uses: xom9ikk/split@v1
      id: split
      with:
        string: ${{ github.event.label.name }}
        separator: /
    - name: Check if Backport Issue Exists
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null
      uses: actions-cool/issues-helper@v3
      id: if-backport-issue-exists
      with:
        actions: 'find-issues'
        token: ${{ github.token }}
        issue-state: 'all'
        title-includes: |
          [BACKPORT][v${{ steps.split.outputs._1 }}]${{ github.event.issue.title }}
    - name: Get Milestone Object
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
      uses: longhorn/bot/milestone-action@master
      id: milestone
      with:
        token: ${{ github.token }}
        repository: ${{ github.repository }}
        milestone_name: v${{ steps.split.outputs._1 }}
    - name: Get Labels
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
      id: labels
      run: |
        RAW_LABELS="${{ join(github.event.issue.labels.*.name, ' ') }}"
        RAW_LABELS="${RAW_LABELS} kind/backport"
        echo "RAW LABELS: $RAW_LABELS"
        LABELS=$(echo "$RAW_LABELS" | sed -r 's/\s*backport\S+//g' | sed -r 's/\s*require\/auto-e2e-test//g' | xargs | sed 's/ /, /g')
        echo "LABELS: $LABELS"
        echo "labels=$LABELS" >> $GITHUB_OUTPUT
    - name: Create Backport Issue
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
      uses: dacbd/create-issue-action@v1
      id: new-issue
      with:
        token: ${{ github.token }}
        title: |
          [BACKPORT][v${{ steps.split.outputs._1 }}]${{ github.event.issue.title }}
        body: |
          backport ${{ github.event.issue.html_url }}
        labels: ${{ steps.labels.outputs.labels }}
        milestone: ${{ fromJSON(steps.milestone.outputs.data).number }}
        assignees: ${{ join(github.event.issue.assignees.*.login, ', ') }}
    - name: Get Repo Id
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
      uses: octokit/request-action@v2.x
      id: repo
      with:
        route: GET /repos/${{ github.repository }}
      env:
        GITHUB_TOKEN: ${{ github.token }}
    - name: Add Backport Issue To Release
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
      uses: longhorn/bot/add-zenhub-release-action@master
      with:
        zenhub_token: ${{ secrets.ZENHUB_TOKEN }}
        repo_id: ${{ fromJSON(steps.repo.outputs.data).id }}
        issue_number: ${{ steps.new-issue.outputs.number }}
        release_name: ${{ steps.split.outputs._1 }}

  automation:
    runs-on: ubuntu-latest
    if: contains(github.event.label.name, 'require/auto-e2e-test')
    steps:
    - name: Is Longhorn Member
      uses: tspascoal/get-user-teams-membership@v1.0.4
      id: is-longhorn-member
      with:
        username: ${{ github.actor }}
        organization: longhorn
        GITHUB_TOKEN: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
    - name: Check if Automation Issue Exists
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null
      uses: actions-cool/issues-helper@v3
      id: if-automation-issue-exists
      with:
        actions: 'find-issues'
        token: ${{ github.token }}
        issue-state: 'all'
        title-includes: |
          [TEST]${{ github.event.issue.title }}
    - name: Create Automation Test Issue
      if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-automation-issue-exists.outputs.issues)[0] == null
      uses: dacbd/create-issue-action@v1
      with:
        token: ${{ github.token }}
        title: |
          [TEST]${{ github.event.issue.title }}
        body: |
          adding/updating auto e2e test cases for ${{ github.event.issue.html_url }} if they can be automated

          cc @longhorn/qa
        labels: kind/test
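The `Get Labels` step above rewrites the original issue's labels for the backport copy: it appends `kind/backport`, strips any `backport/*` and `require/auto-e2e-test` labels, and comma-separates the rest. Running the same pipeline on a hypothetical label set shows the effect:

#!/bin/sh
# Hypothetical input; in the workflow this comes from the issue's labels plus "kind/backport".
RAW_LABELS="kind/bug backport/1.4.1 require/auto-e2e-test kind/backport"
echo "$RAW_LABELS" | sed -r 's/\s*backport\S+//g' | sed -r 's/\s*require\/auto-e2e-test//g' | xargs | sed 's/ /, /g'
# Output: kind/bug, kind/backport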
28  .github/workflows/stale.yaml  (new file)
@@ -0,0 +1,28 @@
name: 'Close stale issues and PRs'

on:
  workflow_call:
  workflow_dispatch:
  schedule:
  - cron: '30 1 * * *'

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/stale@v4
      with:
        stale-issue-message: 'This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.'
        stale-pr-message: 'This PR is stale because it has been open 45 days with no activity. Remove stale label or comment or this will be closed in 10 days.'
        close-issue-message: 'This issue was closed because it has been stalled for 5 days with no activity.'
        close-pr-message: 'This PR was closed because it has been stalled for 10 days with no activity.'
        days-before-stale: 30
        days-before-pr-stale: 45
        days-before-close: 5
        days-before-pr-close: 10
        stale-issue-label: 'stale'
        stale-pr-label: 'stale'
        exempt-all-assignees: true
        exempt-issue-labels: 'kind/bug,kind/doc,kind/enhancement,kind/poc,kind/refactoring,kind/test,kind/task,kind/backport,kind/regression,kind/evaluation'
        exempt-draft-pr: true
        exempt-all-milestones: true
27  .github/workflows/stale.yml  (deleted)
@@ -1,27 +0,0 @@
(Deleted content is identical to the new .github/workflows/stale.yaml above; the workflow file was effectively renamed from stale.yml to stale.yaml.)
3  .gitignore  (3 additions)
@@ -2,3 +2,6 @@
 .idea
 *.iml
 *.ipr
+
+# python venv for dev scripts
+.venv
283
CHANGELOG/CHANGELOG-1.4.0.md
Normal file
283
CHANGELOG/CHANGELOG-1.4.0.md
Normal file
@ -0,0 +1,283 @@
|
||||
## Release Note
|
||||
**v1.4.0 released!** 🎆
|
||||
|
||||
This release introduces many enhancements, improvements, and bug fixes as described below about stability, performance, data integrity, troubleshooting, and so on. Please try it and feedback. Thanks for all the contributions!
|
||||
|
||||
- [Kubernetes 1.25 Support](https://github.com/longhorn/longhorn/issues/4003) [[doc]](https://longhorn.io/docs/1.4.0/deploy/important-notes/#pod-security-policies-disabled--pod-security-admission-introduction)
|
||||
In the previous versions, Longhorn relies on Pod Security Policy (PSP) to authorize Longhorn components for privileged operations. From Kubernetes 1.25, PSP has been removed and replaced with Pod Security Admission (PSA). Longhorn v1.4.0 supports opt-in PSP enablement, so it can support Kubernetes versions with or without PSP.
|
||||
|
||||
- [ARM64 GA](https://github.com/longhorn/longhorn/issues/4206)
|
||||
ARM64 has been experimental from Longhorn v1.1.0. After receiving more user feedback and increasing testing coverage, ARM64 distribution has been stabilized with quality as per our regular regression testing, so it is qualified for general availability.
|
||||
|
||||
- [RWX GA](https://github.com/longhorn/longhorn/issues/2293) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20220727-dedicated-recovery-backend-for-rwx-volume-nfs-server.md)[[doc]](https://longhorn.io/docs/1.4.0/advanced-resources/rwx-workloads/)
|
||||
RWX has been experimental from Longhorn v1.1.0, but it lacks availability support when the Longhorn Share Manager component behind becomes unavailable. Longhorn v1.4.0 supports NFS recovery backend based on Kubernetes built-in resource, ConfigMap, for recovering NFS client connection during the fail-over period. Also, the NFS client hard mode introduction will further avoid previous potential data loss. For the detail, please check the issue and enhancement proposal.
|
||||
|
||||
- [Volume Snapshot Checksum](https://github.com/longhorn/longhorn/issues/4210) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20220922-snapshot-checksum-and-bit-rot-detection.md)[[doc]](https://longhorn.io/docs/1.4.0/references/settings/#snapshot-data-integrity)
|
||||
Data integrity is a continuous effort for Longhorn. In this version, Snapshot Checksum has been introduced w/ some settings to allow users to enable or disable checksum calculation with different modes.
|
||||
|
||||
- [Volume Bit-rot Protection](https://github.com/longhorn/longhorn/issues/3198) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20220922-snapshot-checksum-and-bit-rot-detection.md)[[doc]](https://longhorn.io/docs/1.4.0/references/settings/#snapshot-data-integrity)
|
||||
When enabling the Volume Snapshot Checksum feature, Longhorn will periodically calculate and check the checksums of volume snapshots, find corrupted snapshots, then fix them.
|
||||
|
||||
- [Volume Replica Rebuilding Speedup](https://github.com/longhorn/longhorn/issues/4783)
|
||||
When enabling the Volume Snapshot Checksum feature, Longhorn will use the calculated snapshot checksum to avoid needless snapshot replication between nodes for improving replica rebuilding speed and resource consumption.
|
||||
|
||||
- [Volume Trim](https://github.com/longhorn/longhorn/issues/836) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20221103-filesystem-trim.md)[[doc]](https://longhorn.io/docs/1.4.0/volumes-and-nodes/trim-filesystem/#trim-the-filesystem-in-a-longhorn-volume)
|
||||
Longhorn engine supports UNMAP SCSI command to reclaim space from the block volume.
|
||||
|
||||
- [Online Volume Expansion](https://github.com/longhorn/longhorn/issues/1674) [[doc]](https://longhorn.io/docs/1.4.0/volumes-and-nodes/expansion)
|
||||
Longhorn engine supports optional parameters to pass size expansion requests when updating the volume frontend to support online volume expansion and resize the filesystem via CSI node driver.
|
||||
|
||||
- [Local Volume via Data Locality Strict Mode](https://github.com/longhorn/longhorn/issues/3957) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20200819-keep-a-local-replica-to-engine.md)[[doc]](https://longhorn.io/docs/1.4.0/references/settings/#default-data-locality)
|
||||
Local volume is based on a new Data Locality setting, Strict Local. It will allow users to create one replica volume staying in a consistent location, and the data transfer between the volume frontend and engine will be through a local socket instead of the TCP stack to improve performance and reduce resource consumption.
|
||||
|
||||
- [Volume Recurring Job Backup Restore](https://github.com/longhorn/longhorn/issues/2227) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20201002-allow-recurring-backup-detached-volumes.md)[[doc]](https://longhorn.io/docs/1.4.0/snapshots-and-backups/backup-and-restore/restore-recurring-jobs-from-a-backup/)
|
||||
Recurring jobs binding to a volume can be backed up to the remote backup target together with the volume backup metadata. They can be restored back as well for a better operation experience.
|
||||
|
||||
- [Volume IO Metrics](https://github.com/longhorn/longhorn/issues/2406) [[doc]](https://longhorn.io/docs/1.4.0/monitoring/metrics/#volume)
|
||||
Longhorn enriches Volume metrics by providing real-time IO stats including IOPS, latency, and throughput of R/W IO. Users can set up a monotoning solution like Prometheus to monitor volume performance.
|
||||
|
||||
- [Longhorn System Backup & Restore](https://github.com/longhorn/longhorn/issues/1455) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20220913-longhorn-system-backup-restore.md)[[doc]](https://longhorn.io/docs/1.4.0/advanced-resources/system-backup-restore/)
|
||||
Users can back up the longhorn system to the remote backup target. Afterward, it's able to restore back to an existing cluster in place or a new cluster for specific operational purposes.
|
||||
|
||||
- [Support Bundle Enhancement](https://github.com/longhorn/longhorn/issues/2759) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20221109-support-bundle-enhancement.md)
|
||||
Longhorn introduces a new support bundle integration based on a general [support bundle kit](https://github.com/rancher/support-bundle-kit) solution. This can help us collect more complete troubleshooting info and simulate the cluster environment.
|
||||
|
||||
- [Tunable Timeout between Engine and Replica](https://github.com/longhorn/longhorn/issues/4491) [[doc]](https://longhorn.io/docs/1.4.0/references/settings/#engine-to-replica-timeout)
|
||||
In the current Longhorn versions, the default timeout between the Longhorn engine and replica is fixed without any exposed user settings. This will potentially bring some challenges for users having a low-spec infra environment. By exporting the setting configurable, it will allow users adaptively tune the stability of volume operations.
|
||||
|
||||
## Installation
|
||||
|
||||
> **Please ensure your Kubernetes cluster is at least v1.21 before installing Longhorn v1.4.0.**
|
||||
|
||||
Longhorn supports 3 installation ways including Rancher App Marketplace, Kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.4.0/deploy/install/).
|
||||
|
||||
## Upgrade
|
||||
|
||||
> **Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.0 from v1.3.x. Only support upgrading from 1.3.x.**
|
||||
|
||||
Follow the upgrade instructions [here](https://longhorn.io/docs/1.4.0/deploy/upgrade/).
|
||||
|
||||
## Deprecation & Incompatibilities
|
||||
|
||||
- Pod Security Policy is an opt-in setting. If installing Longhorn with PSP support, need to enable it first.
|
||||
- The built-in CSI Snapshotter sidecar is upgraded to v5.0.1. The v1beta1 version of Volume Snapshot custom resource is deprecated but still supported. However, it will be removed after upgrading CSI Snapshotter to 6.1 or later versions in the future, so please start using v1 version instead before the deprecated version is removed.
|
||||
|
||||
## Known Issues after Release
|
||||
|
||||
Please follow up on [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) about any outstanding issues found after this release.
|
||||
|
||||
## Highlights
|
||||
|
||||
- [FEATURE] Reclaim/Shrink space of volume ([836](https://github.com/longhorn/longhorn/issues/836)) - @yangchiu @derekbit @smallteeths @shuo-wu
|
||||
- [FEATURE] Backup/Restore Longhorn System ([1455](https://github.com/longhorn/longhorn/issues/1455)) - @c3y1huang @khushboo-rancher
|
||||
- [FEATURE] Online volume expansion ([1674](https://github.com/longhorn/longhorn/issues/1674)) - @shuo-wu @chriscchien
|
||||
- [FEATURE] Record recurring schedule in the backups and allow user choose to use it for the restored volume ([2227](https://github.com/longhorn/longhorn/issues/2227)) - @yangchiu @mantissahz
|
||||
- [FEATURE] NFS support (RWX) GA ([2293](https://github.com/longhorn/longhorn/issues/2293)) - @derekbit @chriscchien
|
||||
- [FEATURE] Support metrics for Volume IOPS, throughput and latency real time ([2406](https://github.com/longhorn/longhorn/issues/2406)) - @derekbit @roger-ryao
|
||||
- [FEATURE] Support bundle enhancement ([2759](https://github.com/longhorn/longhorn/issues/2759)) - @c3y1huang @chriscchien
|
||||
- [FEATURE] Automatic identifying of corrupted replica (bit rot detection) ([3198](https://github.com/longhorn/longhorn/issues/3198)) - @yangchiu @derekbit
|
||||
- [FEATURE] Local volume for distributed data workloads ([3957](https://github.com/longhorn/longhorn/issues/3957)) - @derekbit @chriscchien
|
||||
- [IMPROVEMENT] Support K8s 1.25 by updating removed deprecated resource versions like PodSecurityPolicy ([4003](https://github.com/longhorn/longhorn/issues/4003)) - @PhanLe1010 @chriscchien
|
||||
- [IMPROVEMENT] Faster resync time for fresh replica rebuilding ([4092](https://github.com/longhorn/longhorn/issues/4092)) - @yangchiu @derekbit
|
||||
- [FEATURE] Introduce checksum for snapshots ([4210](https://github.com/longhorn/longhorn/issues/4210)) - @derekbit @roger-ryao
|
||||
- [FEATURE] Update K8s version support and component/pkg/build dependencies ([4239](https://github.com/longhorn/longhorn/issues/4239)) - @yangchiu @PhanLe1010
|
||||
- [BUG] data corruption due to COW and block size not being aligned during rebuilding replicas ([4354](https://github.com/longhorn/longhorn/issues/4354)) - @PhanLe1010 @chriscchien
|
||||
- [IMPROVEMENT] Adjust the iSCSI timeout and the engine-to-replica timeout settings ([4491](https://github.com/longhorn/longhorn/issues/4491)) - @yangchiu @derekbit
|
||||
- [IMPROVEMENT] Using specific block size in Longhorn volume's filesystem ([4594](https://github.com/longhorn/longhorn/issues/4594)) - @derekbit @roger-ryao
|
||||
- [IMPROVEMENT] Speed up replica rebuilding by the metadata such as ctime of snapshot disk files ([4783](https://github.com/longhorn/longhorn/issues/4783)) - @yangchiu @derekbit
|
||||
|
||||
## Enhancements
|
||||
|
||||
- [FEATURE] Configure successfulJobsHistoryLimit of CronJobs ([1711](https://github.com/longhorn/longhorn/issues/1711)) - @weizhe0422 @chriscchien
|
||||
- [FEATURE] Allow customization of the cipher used by cryptsetup in volume encryption ([3353](https://github.com/longhorn/longhorn/issues/3353)) - @mantissahz @chriscchien
|
||||
- [FEATURE] New setting to limit the concurrent volume restoring from backup ([4558](https://github.com/longhorn/longhorn/issues/4558)) - @c3y1huang @chriscchien
|
||||
- [FEATURE] Make FS format options configurable in storage class ([4642](https://github.com/longhorn/longhorn/issues/4642)) - @weizhe0422 @chriscchien
|
||||
|
||||
## Improvement
|
||||
|
||||
- [IMPROVEMENT] Change the script into a docker run command mentioned in 'recovery from longhorn backup without system installed' doc ([1521](https://github.com/longhorn/longhorn/issues/1521)) - @weizhe0422 @chriscchien
|
||||
- [IMPROVEMENT] Improve 'recovery from longhorn backup without system installed' doc. ([1522](https://github.com/longhorn/longhorn/issues/1522)) - @weizhe0422 @roger-ryao
|
||||
- [IMPROVEMENT] Dump NFS ganesha logs to pod stdout ([2380](https://github.com/longhorn/longhorn/issues/2380)) - @weizhe0422 @roger-ryao
|
||||
- [IMPROVEMENT] Support failed/obsolete orphaned backup cleanup ([3898](https://github.com/longhorn/longhorn/issues/3898)) - @mantissahz @chriscchien
|
||||
- [IMPROVEMENT] liveness and readiness probes with longhorn csi plugin daemonset ([3907](https://github.com/longhorn/longhorn/issues/3907)) - @c3y1huang @roger-ryao
|
||||
- [IMPROVEMENT] Longhorn doesn't reuse failed replica on a disk with full allocated space ([3921](https://github.com/longhorn/longhorn/issues/3921)) - @PhanLe1010 @chriscchien
|
||||
- [IMPROVEMENT] Reduce syscalls while reading and writing requests in longhorn-engine (engine <-> replica) ([4122](https://github.com/longhorn/longhorn/issues/4122)) - @yangchiu @derekbit
|
||||
- [IMPROVEMENT] Reduce read and write calls in liblonghorn (tgt <-> engine) ([4133](https://github.com/longhorn/longhorn/issues/4133)) - @derekbit
|
||||
- [IMPROVEMENT] Replace the GCC allocator in liblonghorn with a more efficient memory allocator ([4136](https://github.com/longhorn/longhorn/issues/4136)) - @yangchiu @derekbit
|
||||
- [DOC] Update Helm readme and document ([4175](https://github.com/longhorn/longhorn/issues/4175)) - @derekbit
|
||||
- [IMPROVEMENT] Purging a volume before rebuilding starts ([4183](https://github.com/longhorn/longhorn/issues/4183)) - @yangchiu @shuo-wu
|
||||
- [IMPROVEMENT] Schedule volumes based on available disk space ([4185](https://github.com/longhorn/longhorn/issues/4185)) - @yangchiu @c3y1huang
|
||||
- [IMPROVEMENT] Recognize default toleration and node selector to allow Longhorn run on the RKE mixed cluster ([4246](https://github.com/longhorn/longhorn/issues/4246)) - @c3y1huang @chriscchien
|
||||
- [IMPROVEMENT] Support bundle doesn't collect the snapshot yamls ([4285](https://github.com/longhorn/longhorn/issues/4285)) - @yangchiu @PhanLe1010
|
||||
- [IMPROVEMENT] Avoid accidentally deleting engine images that are still in use ([4332](https://github.com/longhorn/longhorn/issues/4332)) - @derekbit @chriscchien
|
||||
- [IMPROVEMENT] Show non-JSON error from backup store ([4336](https://github.com/longhorn/longhorn/issues/4336)) - @c3y1huang
|
||||
- [IMPROVEMENT] Update nfs-ganesha to v4.0 ([4351](https://github.com/longhorn/longhorn/issues/4351)) - @derekbit
|
||||
- [IMPROVEMENT] show error when failed to init frontend ([4362](https://github.com/longhorn/longhorn/issues/4362)) - @c3y1huang
|
||||
- [IMPROVEMENT] Too many debug-level log messages in engine instance-manager ([4427](https://github.com/longhorn/longhorn/issues/4427)) - @derekbit @chriscchien
|
||||
- [IMPROVEMENT] Add prep work for fixing the corrupted filesystem using fsck in KB ([4440](https://github.com/longhorn/longhorn/issues/4440)) - @derekbit
|
||||
- [IMPROVEMENT] Prevent users from accidentally uninstalling Longhorn ([4509](https://github.com/longhorn/longhorn/issues/4509)) - @yangchiu @PhanLe1010
|
||||
- [IMPROVEMENT] add possibility to use nodeSelector on the storageClass ([4574](https://github.com/longhorn/longhorn/issues/4574)) - @weizhe0422 @roger-ryao
|
||||
- [IMPROVEMENT] Check if node schedulable condition is set before trying to read it ([4581](https://github.com/longhorn/longhorn/issues/4581)) - @weizhe0422 @roger-ryao
|
||||
- [IMPROVEMENT] Review/consolidate the sectorSize in replica server, replica volume, and engine ([4599](https://github.com/longhorn/longhorn/issues/4599)) - @yangchiu @derekbit
|
||||
- [IMPROVEMENT] Reorganize longhorn-manager/k8s/patches and auto-generate preserveUnknownFields field ([4600](https://github.com/longhorn/longhorn/issues/4600)) - @yangchiu @derekbit
|
||||
- [IMPROVEMENT] share-manager pod bypasses the kubernetes scheduler ([4789](https://github.com/longhorn/longhorn/issues/4789)) - @joshimoo @chriscchien
|
||||
- [IMPROVEMENT] Unify the format of returned error messages in longhorn-engine ([4828](https://github.com/longhorn/longhorn/issues/4828)) - @derekbit
|
||||
- [IMPROVEMENT] Longhorn system backup/restore UI ([4855](https://github.com/longhorn/longhorn/issues/4855)) - @smallteeths
|
||||
- [IMPROVEMENT] Replace the modTime (mtime) with ctime in snapshot hash ([4934](https://github.com/longhorn/longhorn/issues/4934)) - @derekbit @chriscchien
|
||||
- [BUG] volume is stuck in attaching/detaching loop with error `Failed to init frontend: device...` ([4959](https://github.com/longhorn/longhorn/issues/4959)) - @derekbit @PhanLe1010 @chriscchien
|
||||
- [IMPROVEMENT] Affinity in the longhorn-ui deployment within the helm chart ([4987](https://github.com/longhorn/longhorn/issues/4987)) - @mantissahz @chriscchien
|
||||
- [IMPROVEMENT] Allow users to change volume.spec.snapshotDataIntegrity on UI ([4994](https://github.com/longhorn/longhorn/issues/4994)) - @yangchiu @smallteeths
|
||||
- [IMPROVEMENT] Backup and restore recurring jobs on UI ([5009](https://github.com/longhorn/longhorn/issues/5009)) - @smallteeths @chriscchien
|
||||
- [IMPROVEMENT] Disable `Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly` for RWX volumes ([5017](https://github.com/longhorn/longhorn/issues/5017)) - @derekbit @chriscchien
|
||||
- [IMPROVEMENT] Enable fast replica rebuilding by default ([5023](https://github.com/longhorn/longhorn/issues/5023)) - @derekbit @roger-ryao
|
||||
- [IMPROVEMENT] Upgrade tcmalloc in longhorn-engine ([5050](https://github.com/longhorn/longhorn/issues/5050)) - @derekbit
|
||||
- [IMPROVEMENT] UI show error when backup target is empty for system backup ([5056](https://github.com/longhorn/longhorn/issues/5056)) - @smallteeths @khushboo-rancher
|
||||
- [IMPROVEMENT] System restore job name should be Longhorn prefixed ([5057](https://github.com/longhorn/longhorn/issues/5057)) - @c3y1huang @khushboo-rancher
|
||||
- [BUG] Error in logs while restoring the system backup ([5061](https://github.com/longhorn/longhorn/issues/5061)) - @c3y1huang @chriscchien
|
||||
- [IMPROVEMENT] Add warning message to when deleting the restoring backups ([5065](https://github.com/longhorn/longhorn/issues/5065)) - @smallteeths @khushboo-rancher @roger-ryao
|
||||
- [IMPROVEMENT] Inconsistent name convention across volume backup restore and system backup restore ([5066](https://github.com/longhorn/longhorn/issues/5066)) - @smallteeths @roger-ryao
|
||||
- [IMPROVEMENT] System restore should proceed to restore other volumes if restoring one volume keeps failing for a certain time. ([5086](https://github.com/longhorn/longhorn/issues/5086)) - @c3y1huang @khushboo-rancher @roger-ryao
|
||||
- [IMPROVEMENT] Support customized number of replicas of webhook and recovery-backend ([5087](https://github.com/longhorn/longhorn/issues/5087)) - @derekbit @chriscchien
|
||||
- [IMPROVEMENT] Simplify the page by placing some configuration items in the advanced configuration when creating the volume ([5090](https://github.com/longhorn/longhorn/issues/5090)) - @yangchiu @smallteeths
|
||||
- [IMPROVEMENT] Support replica sync client timeout setting to stabilize replica rebuilding ([5110](https://github.com/longhorn/longhorn/issues/5110)) - @derekbit @chriscchien
|
||||
- [IMPROVEMENT] Set a newly created volume's data integrity from UI to `ignored` rather than `Fast-Check`. ([5126](https://github.com/longhorn/longhorn/issues/5126)) - @yangchiu @smallteeths
|
||||
|
||||
## Performance

- [BUG] Turn a node down and up, workload takes longer time to come back online in Longhorn v1.2.0 ([2947](https://github.com/longhorn/longhorn/issues/2947)) - @yangchiu @PhanLe1010
- [TASK] RWX volume performance measurement and investigation ([3665](https://github.com/longhorn/longhorn/issues/3665)) - @derekbit
- [TASK] Verify spinning disk/HDD via the current e2e regression ([4182](https://github.com/longhorn/longhorn/issues/4182)) - @yangchiu
- [BUG] test_csi_snapshot_snap_create_volume_from_snapshot failed when using HDD as Longhorn disks ([4227](https://github.com/longhorn/longhorn/issues/4227)) - @yangchiu @PhanLe1010
- [TASK] Disable tcmalloc in data path because newer tcmalloc version leads to performance drop ([5096](https://github.com/longhorn/longhorn/issues/5096)) - @derekbit @chriscchien

## Stability

- [BUG] Longhorn won't fail all replicas if there is no valid backend during the engine starting stage ([1330](https://github.com/longhorn/longhorn/issues/1330)) - @derekbit @roger-ryao
- [BUG] Every other backup fails and crashes the volume (Segmentation Fault) ([1768](https://github.com/longhorn/longhorn/issues/1768)) - @olljanat @mantissahz
- [BUG] Backend sizes do not match 5368709120 != 10737418240 in the engine initiation phase ([3601](https://github.com/longhorn/longhorn/issues/3601)) - @derekbit @chriscchien
- [BUG] Somehow the Rebuilding field inside volume.meta is set to true, causing the volume to get stuck in an attaching/detaching loop ([4212](https://github.com/longhorn/longhorn/issues/4212)) - @yangchiu @derekbit
- [BUG] Engine binary cannot be recovered after being removed accidentally ([4380](https://github.com/longhorn/longhorn/issues/4380)) - @yangchiu @c3y1huang
- [TASK] Disable tcmalloc in longhorn-engine and longhorn-instance-manager ([5068](https://github.com/longhorn/longhorn/issues/5068)) - @derekbit

## Bugs

- [BUG] Removing old instance records after the new IM pod is launched will take 1 minute ([1363](https://github.com/longhorn/longhorn/issues/1363)) - @mantissahz
- [BUG] Restoring volume stuck forever if the backup is already deleted ([1867](https://github.com/longhorn/longhorn/issues/1867)) - @mantissahz @chriscchien
- [BUG] Duplicated default instance manager leads to engine/replica that cannot be started ([3000](https://github.com/longhorn/longhorn/issues/3000)) - @PhanLe1010 @roger-ryao
- [BUG] Restore from backup sometimes failed if having a high-frequency recurring backup job w/ retention ([3055](https://github.com/longhorn/longhorn/issues/3055)) - @mantissahz @roger-ryao
- [BUG] Newly created backup stays in `InProgress` when the volume is deleted before the backup finishes ([3122](https://github.com/longhorn/longhorn/issues/3122)) - @mantissahz @chriscchien
- [BUG] Degraded volume generates failed replica making volume unschedulable ([3220](https://github.com/longhorn/longhorn/issues/3220)) - @derekbit @chriscchien
- [BUG] The default access mode of a restored RWX volume is RWO ([3444](https://github.com/longhorn/longhorn/issues/3444)) - @weizhe0422 @roger-ryao
- [BUG] Replica rebuilding failure with error "Replica must be closed, Can not add in state: open" ([3828](https://github.com/longhorn/longhorn/issues/3828)) - @mantissahz @roger-ryao
- [BUG] Max length of volume name not consistent between frontend and backend ([3917](https://github.com/longhorn/longhorn/issues/3917)) - @weizhe0422 @roger-ryao
- [BUG] Can't delete volumesnapshot if backup removed first ([4107](https://github.com/longhorn/longhorn/issues/4107)) - @weizhe0422 @chriscchien
- [BUG] An IM-proxy connection not closed in full regression 1.3 ([4113](https://github.com/longhorn/longhorn/issues/4113)) - @c3y1huang @chriscchien
- [BUG] Scale replica warning ([4120](https://github.com/longhorn/longhorn/issues/4120)) - @c3y1huang @chriscchien
- [BUG] Wrong nodeOrDiskEvicted collected in node monitor ([4143](https://github.com/longhorn/longhorn/issues/4143)) - @yangchiu @derekbit
- [BUG] Misleading log "BUG: replica is running but storage IP is empty" ([4153](https://github.com/longhorn/longhorn/issues/4153)) - @shuo-wu @chriscchien
- [BUG] longhorn-manager cannot start while upgrading if the configmap contains volume sensitive settings ([4160](https://github.com/longhorn/longhorn/issues/4160)) - @derekbit @chriscchien
- [BUG] Replica stuck in buggy state with status.currentState is error and the spec.desireState is running ([4197](https://github.com/longhorn/longhorn/issues/4197)) - @yangchiu @PhanLe1010
- [BUG] After updating longhorn to version 1.3.0, only 1 node had problems and I can't even delete it ([4213](https://github.com/longhorn/longhorn/issues/4213)) - @derekbit @c3y1huang @chriscchien
- [BUG] Unable to use a TTY error when running environment_check.sh ([4216](https://github.com/longhorn/longhorn/issues/4216)) - @flkdnt @chriscchien
- [BUG] The last healthy replica may be evicted or removed ([4238](https://github.com/longhorn/longhorn/issues/4238)) - @yangchiu @shuo-wu
- [BUG] Volume detaching and attaching repeatedly while creating multiple snapshots with the same id ([4250](https://github.com/longhorn/longhorn/issues/4250)) - @yangchiu @derekbit
- [BUG] Backing image is not deleted and recreated correctly ([4256](https://github.com/longhorn/longhorn/issues/4256)) - @shuo-wu @chriscchien
- [BUG] longhorn-ui fails to start on RKE2 with cis-1.6 profile for Longhorn v1.3.0 with helm install ([4266](https://github.com/longhorn/longhorn/issues/4266)) - @yangchiu @mantissahz
- [BUG] Longhorn volume stuck in deleting state ([4278](https://github.com/longhorn/longhorn/issues/4278)) - @yangchiu @PhanLe1010
- [BUG] The IP address is duplicated when using storage network and the second network is controlled by ovs-cni ([4281](https://github.com/longhorn/longhorn/issues/4281)) - @mantissahz
- [BUG] Build longhorn-ui image error ([4283](https://github.com/longhorn/longhorn/issues/4283)) - @smallteeths
- [BUG] Wrong conditions in the Chart default-setting manifest for Rancher deployed Windows Cluster feature ([4289](https://github.com/longhorn/longhorn/issues/4289)) - @derekbit @chriscchien
- [BUG] Volume operations/rebuilding error during eviction ([4294](https://github.com/longhorn/longhorn/issues/4294)) - @yangchiu @shuo-wu
- [BUG] longhorn-manager deletes the same pod multiple times when rebooting ([4302](https://github.com/longhorn/longhorn/issues/4302)) - @mantissahz @w13915984028
- [BUG] test_setting_backing_image_auto_cleanup failed because the backing image file isn't deleted on the corresponding node as expected ([4308](https://github.com/longhorn/longhorn/issues/4308)) - @shuo-wu @chriscchien
- [BUG] After automatically force delete terminating pods of deployment on down node, data lost and I/O error ([4384](https://github.com/longhorn/longhorn/issues/4384)) - @yangchiu @derekbit @PhanLe1010
- [BUG] Volume can not attach to node when engine image DaemonSet pods are not fully deployed ([4386](https://github.com/longhorn/longhorn/issues/4386)) - @PhanLe1010 @chriscchien
- [BUG] Error/warning during uninstallation of Longhorn v1.3.1 via manifest ([4405](https://github.com/longhorn/longhorn/issues/4405)) - @PhanLe1010 @roger-ryao
- [BUG] Can't upgrade engine if a volume was created in Longhorn v1.0 and the volume.spec.dataLocality is `""` ([4412](https://github.com/longhorn/longhorn/issues/4412)) - @derekbit @chriscchien
- [BUG] Confusing description of the label for replica deletion ([4430](https://github.com/longhorn/longhorn/issues/4430)) - @yangchiu @smallteeths
- [BUG] Update the Longhorn document in Using the Environment Check Script ([4450](https://github.com/longhorn/longhorn/issues/4450)) - @weizhe0422 @roger-ryao
- [BUG] Unable to search 1.3.1 doc by algolia ([4457](https://github.com/longhorn/longhorn/issues/4457)) - @mantissahz @roger-ryao
- [BUG] Misleading message "The volume is in expansion progress from size 20Gi to 10Gi" if the expansion is invalid ([4475](https://github.com/longhorn/longhorn/issues/4475)) - @yangchiu @smallteeths
- [BUG] Flaky case test_autosalvage_with_data_locality_enabled ([4489](https://github.com/longhorn/longhorn/issues/4489)) - @weizhe0422
- [BUG] Continuously rebuild when auto-balance==least-effort and existing node becomes unschedulable ([4502](https://github.com/longhorn/longhorn/issues/4502)) - @yangchiu @c3y1huang
- [BUG] Inconsistent system snapshots between replicas after rebuilding ([4513](https://github.com/longhorn/longhorn/issues/4513)) - @derekbit
- [BUG] Prometheus metric for backup state (longhorn_backup_state) returns wrong values ([4521](https://github.com/longhorn/longhorn/issues/4521)) - @mantissahz @roger-ryao
- [BUG] Longhorn accidentally schedules all replicas onto a worker node even though the setting Replica Node Level Soft Anti-Affinity is currently disabled ([4546](https://github.com/longhorn/longhorn/issues/4546)) - @yangchiu @mantissahz
- [BUG] LH continuously reports `invalid customized default setting taint-toleration` ([4554](https://github.com/longhorn/longhorn/issues/4554)) - @weizhe0422 @roger-ryao
- [BUG] The values.yaml in the longhorn helm chart contains unused values ([4601](https://github.com/longhorn/longhorn/issues/4601)) - @weizhe0422 @roger-ryao
- [BUG] longhorn-engine integration test test_restore_to_file_with_backing_file failed after upgrade to sles 15.4 ([4632](https://github.com/longhorn/longhorn/issues/4632)) - @mantissahz
- [BUG] Cannot pull a backup created by another Longhorn system from the remote backup target ([4637](https://github.com/longhorn/longhorn/issues/4637)) - @yangchiu @mantissahz @roger-ryao
- [BUG] Fix the share-manager deletion failure if the configmap does not exist ([4648](https://github.com/longhorn/longhorn/issues/4648)) - @derekbit @roger-ryao
- [BUG] Updating volume-scheduling-error failure for RWX volumes and expanding volumes ([4654](https://github.com/longhorn/longhorn/issues/4654)) - @derekbit @chriscchien
- [BUG] charts/longhorn/questions.yaml includes outdated csi-image tags ([4669](https://github.com/longhorn/longhorn/issues/4669)) - @PhanLe1010 @roger-ryao
- [BUG] Rebuilding the replica failed after upgrading from 1.2.4 to 1.3.2-rc2 ([4705](https://github.com/longhorn/longhorn/issues/4705)) - @derekbit @chriscchien
- [BUG] Cannot re-run helm uninstallation if the first one failed and cannot fetch logs of failed uninstallation pod ([4711](https://github.com/longhorn/longhorn/issues/4711)) - @yangchiu @PhanLe1010 @roger-ryao
- [BUG] The old instance-manager-r Pods are not deleted after upgrade ([4726](https://github.com/longhorn/longhorn/issues/4726)) - @mantissahz @chriscchien
- [BUG] Replica Auto Balance repeatedly deletes the local replica and triggers rebuilding ([4761](https://github.com/longhorn/longhorn/issues/4761)) - @c3y1huang @roger-ryao
- [BUG] Volume metafile getting deleted or empty results in a detach-attach loop ([4846](https://github.com/longhorn/longhorn/issues/4846)) - @mantissahz @chriscchien
- [BUG] Backing image is stuck at `in-progress` status if the provided checksum is incorrect ([4852](https://github.com/longhorn/longhorn/issues/4852)) - @FrankYang0529 @chriscchien
- [BUG] Duplicate channel close error in the backing image manager related components ([4865](https://github.com/longhorn/longhorn/issues/4865)) - @weizhe0422 @roger-ryao
- [BUG] The node ID of backing image data source somehow gets changed, leading to file handling failures ([4887](https://github.com/longhorn/longhorn/issues/4887)) - @shuo-wu @chriscchien
- [BUG] Cannot upload a backing image larger than 10G ([4902](https://github.com/longhorn/longhorn/issues/4902)) - @smallteeths @shuo-wu @chriscchien
- [BUG] Failed to build longhorn-instance-manager master branch ([4946](https://github.com/longhorn/longhorn/issues/4946)) - @derekbit
- [BUG] PVC only works with plural annotation `volumes.kubernetes.io/storage-provisioner: driver.longhorn.io` ([4951](https://github.com/longhorn/longhorn/issues/4951)) - @weizhe0422
- [BUG] Failed to create a replenished replica process because of the newly added option ([4962](https://github.com/longhorn/longhorn/issues/4962)) - @yangchiu @derekbit
- [BUG] Incorrect log messages in longhorn-engine processRemoveSnapshot() ([4980](https://github.com/longhorn/longhorn/issues/4980)) - @derekbit
- [BUG] System backup showing wrong age ([5047](https://github.com/longhorn/longhorn/issues/5047)) - @smallteeths @khushboo-rancher
- [BUG] System backup should validate empty backup target ([5055](https://github.com/longhorn/longhorn/issues/5055)) - @c3y1huang @khushboo-rancher
- [BUG] Missing the `restoreVolumeRecurringJob` parameter in the VolumeGet API ([5062](https://github.com/longhorn/longhorn/issues/5062)) - @mantissahz @roger-ryao
- [BUG] System restore stuck in restoring if a PVC exists with an identical name ([5064](https://github.com/longhorn/longhorn/issues/5064)) - @c3y1huang @roger-ryao
- [BUG] No error shown on UI if system backup conf not available ([5072](https://github.com/longhorn/longhorn/issues/5072)) - @c3y1huang @khushboo-rancher
- [BUG] System restore missing services ([5074](https://github.com/longhorn/longhorn/issues/5074)) - @yangchiu @c3y1huang
- [BUG] In a system restore, PV & PVC are not restored if PVC was created with 'longhorn-static' (created via Longhorn GUI) ([5091](https://github.com/longhorn/longhorn/issues/5091)) - @c3y1huang @khushboo-rancher
- [BUG][v1.4.0-rc1] image security scan CRITICAL issues ([5107](https://github.com/longhorn/longhorn/issues/5107)) - @yangchiu @mantissahz
- [BUG] Snapshot trim shows the wrong label in the volume detail page ([5127](https://github.com/longhorn/longhorn/issues/5127)) - @smallteeths @chriscchien
- [BUG] Filesystem on the volume with a backing image is corrupted after applying trim operation ([5129](https://github.com/longhorn/longhorn/issues/5129)) - @derekbit @chriscchien
- [BUG] Error in uninstall job ([5132](https://github.com/longhorn/longhorn/issues/5132)) - @c3y1huang @chriscchien
- [BUG] Uninstall job unable to delete the systembackup and systemrestore CRs ([5133](https://github.com/longhorn/longhorn/issues/5133)) - @c3y1huang @chriscchien
- [BUG] Nil pointer dereference error on restoring the system backup ([5134](https://github.com/longhorn/longhorn/issues/5134)) - @yangchiu @c3y1huang
- [BUG] UI option Update Replicas Auto Balance should use capital letter like others ([5154](https://github.com/longhorn/longhorn/issues/5154)) - @smallteeths @chriscchien
- [BUG] System restore cannot roll out when volume name is different from the PV ([5157](https://github.com/longhorn/longhorn/issues/5157)) - @yangchiu @c3y1huang
- [BUG] Online expansion doesn't succeed after a failed expansion ([5169](https://github.com/longhorn/longhorn/issues/5169)) - @derekbit @shuo-wu @khushboo-rancher

## Misc

- [DOC] RWX support for NVIDIA JETSON Ubuntu 18.04 LTS kernel requires enabling NFSv4.1 ([3157](https://github.com/longhorn/longhorn/issues/3157)) - @yangchiu @derekbit
- [DOC] Add information about encryption algorithm to documentation ([3285](https://github.com/longhorn/longhorn/issues/3285)) - @mantissahz
- [DOC] Update the doc of volume size after introducing snapshot prune ([4158](https://github.com/longhorn/longhorn/issues/4158)) - @shuo-wu
- [DOC] Update the outdated "Customizing Default Settings" document ([4174](https://github.com/longhorn/longhorn/issues/4174)) - @derekbit
- [TASK] Refresh distro version support for 1.4 ([4401](https://github.com/longhorn/longhorn/issues/4401)) - @weizhe0422
- [TASK] Update official document Longhorn Networking ([4478](https://github.com/longhorn/longhorn/issues/4478)) - @derekbit
- [TASK] Update preserveUnknownFields fields in longhorn-manager CRD manifest ([4505](https://github.com/longhorn/longhorn/issues/4505)) - @derekbit @roger-ryao
- [TASK] Disable doc search for archived versions < 1.1 ([4524](https://github.com/longhorn/longhorn/issues/4524)) - @mantissahz
- [TASK] Update longhorn components with the latest backupstore ([4552](https://github.com/longhorn/longhorn/issues/4552)) - @derekbit
- [TASK] Update base image of all components from BCI 15.3 to 15.4 ([4617](https://github.com/longhorn/longhorn/issues/4617)) - @yangchiu
- [DOC] Update the Longhorn document in Install with Helm ([4745](https://github.com/longhorn/longhorn/issues/4745)) - @roger-ryao
- [TASK] Create longhornio support-bundle-kit image ([4911](https://github.com/longhorn/longhorn/issues/4911)) - @yangchiu
- [DOC] Add Recurring * Jobs History Limit to setting reference ([4912](https://github.com/longhorn/longhorn/issues/4912)) - @weizhe0422 @roger-ryao
- [DOC] Add Failed Backup TTL to setting reference ([4913](https://github.com/longhorn/longhorn/issues/4913)) - @mantissahz
- [TASK] Create longhornio liveness probe image ([4945](https://github.com/longhorn/longhorn/issues/4945)) - @yangchiu
- [TASK] Make system managed components branch-based build ([5024](https://github.com/longhorn/longhorn/issues/5024)) - @yangchiu
- [TASK] Remove unstable s390x from PR check for all repos ([5040](https://github.com/longhorn/longhorn/issues/5040)) -
- [TASK] Update longhorn-share-manager's nfs-ganesha to v4.2.1 ([5083](https://github.com/longhorn/longhorn/issues/5083)) - @derekbit @mantissahz
- [DOC] Update the Longhorn document in Setting up Prometheus and Grafana ([5158](https://github.com/longhorn/longhorn/issues/5158)) - @roger-ryao

## Contributors

- @FrankYang0529
- @PhanLe1010
- @c3y1huang
- @chriscchien
- @derekbit
- @flkdnt
- @innobead
- @joshimoo
- @khushboo-rancher
- @mantissahz
- @olljanat
- @roger-ryao
- @shuo-wu
- @smallteeths
- @w13915984028
- @weizhe0422
- @yangchiu

CHANGELOG/CHANGELOG-1.4.1.md (new file, 88 lines added)
@@ -0,0 +1,88 @@

## Release Note

### **v1.4.1 released!** 🎆

This release introduces improvements and bug fixes covering stability, performance, space efficiency, resilience, and more, as described below. Please try it out and provide feedback. Thanks for all the contributions!

## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing Longhorn v1.4.1.**

Longhorn supports three installation methods: Rancher App Marketplace, kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.4.1/deploy/install/).

## Upgrade

> **Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.1 from v1.3.x/v1.4.0, which are the only supported source versions.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.4.1/deploy/upgrade/).

## Deprecation & Incompatibilities

N/A

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) on any outstanding issues found after this release.

## Highlights

- [IMPROVEMENT] Periodically clean up volume snapshots ([3836](https://github.com/longhorn/longhorn/issues/3836)) - @c3y1huang @chriscchien

## Improvement

- [IMPROVEMENT] Do not count the failed-replica reuse failure caused by the disconnection ([1923](https://github.com/longhorn/longhorn/issues/1923)) - @yangchiu @mantissahz
- [IMPROVEMENT] Update uninstallation info to include the 'Deleting Confirmation Flag' in chart ([5250](https://github.com/longhorn/longhorn/issues/5250)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Fix Guaranteed Engine Manager CPU recommendation formula in UI ([5338](https://github.com/longhorn/longhorn/issues/5338)) - @c3y1huang @smallteeths @roger-ryao
- [IMPROVEMENT] Update PSP validation in the Longhorn upstream chart ([5339](https://github.com/longhorn/longhorn/issues/5339)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Update ganesha nfs to 4.2.3 ([5356](https://github.com/longhorn/longhorn/issues/5356)) - @derekbit @roger-ryao
- [IMPROVEMENT] Set write-cache of longhorn block device to off explicitly ([5382](https://github.com/longhorn/longhorn/issues/5382)) - @derekbit @chriscchien

## Stability

- [BUG] Memory leak in CSI plugin caused by stuck umount processes if the RWX volume is already gone ([5296](https://github.com/longhorn/longhorn/issues/5296)) - @derekbit @roger-ryao
- [BUG] share-manager pod failed to restart after kubelet restart ([5507](https://github.com/longhorn/longhorn/issues/5507)) - @yangchiu @derekbit

## Bugs

- [BUG] Longhorn 1.3.2 fails to backup & restore volumes behind Internet proxy ([5054](https://github.com/longhorn/longhorn/issues/5054)) - @mantissahz @chriscchien
- [BUG] RWX doesn't work with release 1.4.0 due to end grace update error from recovery backend ([5183](https://github.com/longhorn/longhorn/issues/5183)) - @derekbit @chriscchien
- [BUG] Incorrect indentation of charts/questions.yaml ([5196](https://github.com/longhorn/longhorn/issues/5196)) - @mantissahz @roger-ryao
- [BUG] Updating option "Allow snapshots removal during trim" for old volumes failed ([5218](https://github.com/longhorn/longhorn/issues/5218)) - @shuo-wu @roger-ryao
- [BUG] Incorrect router retry mechanism ([5259](https://github.com/longhorn/longhorn/issues/5259)) - @mantissahz @chriscchien
- [BUG] System Backup is stuck at Uploading if there are PVs not provisioned by CSI driver ([5286](https://github.com/longhorn/longhorn/issues/5286)) - @c3y1huang @chriscchien
- [BUG] Sync up with backup target during DR volume activation ([5292](https://github.com/longhorn/longhorn/issues/5292)) - @yangchiu @weizhe0422
- [BUG] environment_check.sh does not handle different kernel versions in cluster correctly ([5304](https://github.com/longhorn/longhorn/issues/5304)) - @achims311 @roger-ryao
- [BUG] instance-manager-r high memory consumption ([5312](https://github.com/longhorn/longhorn/issues/5312)) - @derekbit @roger-ryao
- [BUG] Replica rebuilding caused by rke2/kubelet restart ([5340](https://github.com/longhorn/longhorn/issues/5340)) - @derekbit @chriscchien
- [BUG] Error message not consistent between create/update recurring job when retain number greater than 50 ([5434](https://github.com/longhorn/longhorn/issues/5434)) - @c3y1huang @chriscchien
- [BUG] Do not copy Host header to API requests forwarded to Longhorn Manager ([5438](https://github.com/longhorn/longhorn/issues/5438)) - @yangchiu @smallteeths
- [BUG] RWX Volume attachment is getting Failed ([5456](https://github.com/longhorn/longhorn/issues/5456)) - @derekbit
- [BUG] test case test_backup_lock_deletion_during_restoration failed ([5458](https://github.com/longhorn/longhorn/issues/5458)) - @yangchiu @derekbit
- [BUG] [master] [v1.4.1-rc1] Volume restoration will never complete if attached node is down ([5464](https://github.com/longhorn/longhorn/issues/5464)) - @derekbit @weizhe0422 @chriscchien
- [BUG] Unable to create support bundle agent pod in air-gap environment ([5467](https://github.com/longhorn/longhorn/issues/5467)) - @yangchiu @c3y1huang
- [BUG] Node disconnection test failed ([5476](https://github.com/longhorn/longhorn/issues/5476)) - @yangchiu @derekbit
- [BUG] Physical node down test failed ([5477](https://github.com/longhorn/longhorn/issues/5477)) - @derekbit @chriscchien
- [BUG] Backing image with sync failure ([5481](https://github.com/longhorn/longhorn/issues/5481)) - @ChanYiLin @roger-ryao
- [BUG] Example of data migration doesn't work for hidden/./dot-files ([5484](https://github.com/longhorn/longhorn/issues/5484)) - @hedefalk @shuo-wu @chriscchien
- [BUG] test case test_dr_volume_with_backup_block_deletion failed ([5489](https://github.com/longhorn/longhorn/issues/5489)) - @yangchiu @derekbit

## Misc

- [TASK][UI] add new recurring job tasks ([5272](https://github.com/longhorn/longhorn/issues/5272)) - @smallteeths @chriscchien

## Contributors

- @ChanYiLin
- @PhanLe1010
- @achims311
- @c3y1huang
- @chriscchien
- @derekbit
- @hedefalk
- @innobead
- @mantissahz
- @roger-ryao
- @shuo-wu
- @smallteeths
- @weizhe0422
- @yangchiu

CHANGELOG/CHANGELOG-1.4.2.md (new file, 92 lines added)
@@ -0,0 +1,92 @@

## Release Note

### **v1.4.2 released!** 🎆

Longhorn v1.4.2 is the latest stable version of Longhorn 1.4.
It introduces improvements and bug fixes in the areas of stability, performance, space efficiency, resilience, and so on. Please try it out and provide feedback. Thanks for all the contributions!

> For the definition of stable or latest release, please check [here](https://github.com/longhorn/longhorn#releases).

## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing v1.4.2.**

Longhorn supports three installation methods: Rancher App Marketplace, kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.4.2/deploy/install/).

## Upgrade

> **Please read the [important notes](https://longhorn.io/docs/1.4.2/deploy/important-notes/) first and ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.2 from v1.3.x/v1.4.x, which are the only supported source versions.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.4.2/deploy/upgrade/).

## Deprecation & Incompatibilities

N/A

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) on any outstanding issues found after this release.

## Highlights

- [IMPROVEMENT] Use PDB to protect Longhorn components from unexpected drains ([3304](https://github.com/longhorn/longhorn/issues/3304)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Introduce timeout mechanism for the sparse file syncing service ([4305](https://github.com/longhorn/longhorn/issues/4305)) - @yangchiu @ChanYiLin
- [IMPROVEMENT] Recurring jobs create new snapshots while not being able to clean up old ones ([4898](https://github.com/longhorn/longhorn/issues/4898)) - @mantissahz @chriscchien

## Improvement

- [IMPROVEMENT] Support bundle collects dmesg, syslog and related information of longhorn nodes ([5073](https://github.com/longhorn/longhorn/issues/5073)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Fix BackingImage uploading/downloading flow to prevent client timeout ([5443](https://github.com/longhorn/longhorn/issues/5443)) - @ChanYiLin @chriscchien
- [IMPROVEMENT] Create a new setting so that Longhorn removes PDB for instance-manager-r that doesn't have any running instance inside it ([5549](https://github.com/longhorn/longhorn/issues/5549)) - @PhanLe1010 @khushboo-rancher
- [IMPROVEMENT] Deprecate the setting `allow-node-drain-with-last-healthy-replica` and replace it with the `node-drain-policy` setting ([5585](https://github.com/longhorn/longhorn/issues/5585)) - @yangchiu @PhanLe1010
- [IMPROVEMENT][UI] Recurring jobs create new snapshots while not being able to clean up old ones ([5610](https://github.com/longhorn/longhorn/issues/5610)) - @mantissahz @smallteeths @roger-ryao
- [IMPROVEMENT] Only activate replica if it doesn't have deletion timestamp during volume engine upgrade ([5632](https://github.com/longhorn/longhorn/issues/5632)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Clean up backup target if the backup target setting is unset ([5655](https://github.com/longhorn/longhorn/issues/5655)) - @yangchiu @ChanYiLin

## Resilience

- [BUG] Directly mark replica as failed if the node is deleted ([5542](https://github.com/longhorn/longhorn/issues/5542)) - @weizhe0422 @roger-ryao
- [BUG] RWX volume is stuck at detaching when the attached node is down ([5558](https://github.com/longhorn/longhorn/issues/5558)) - @derekbit @roger-ryao
- [BUG] Backup monitor gets stuck in an infinite loop if backup isn't found ([5662](https://github.com/longhorn/longhorn/issues/5662)) - @derekbit @chriscchien
- [BUG] Resources such as replicas are somehow not mutated when network is unstable ([5762](https://github.com/longhorn/longhorn/issues/5762)) - @derekbit @roger-ryao
- [BUG] Instance manager may not update instance status for a minute after starting ([5809](https://github.com/longhorn/longhorn/issues/5809)) - @ejweber @chriscchien

## Bugs

- [BUG] Delete an uploading backing image, the corresponding LH temp file is not deleted ([3682](https://github.com/longhorn/longhorn/issues/3682)) - @ChanYiLin @chriscchien
- [BUG] Cannot create backup in a cluster where the engine image is not fully deployed ([5248](https://github.com/longhorn/longhorn/issues/5248)) - @ChanYiLin @roger-ryao
- [BUG] Upgrade engine --> spec.restoreVolumeRecurringJob and spec.snapshotDataIntegrity Unsupported value ([5485](https://github.com/longhorn/longhorn/issues/5485)) - @yangchiu @derekbit
- [BUG] Bulk backup deletion causes restoring volume to finish with attached state ([5506](https://github.com/longhorn/longhorn/issues/5506)) - @ChanYiLin @roger-ryao
- [BUG] Volume expansion starts for no reason, gets stuck on current size > expected size ([5513](https://github.com/longhorn/longhorn/issues/5513)) - @mantissahz @roger-ryao
- [BUG] RWX volume attachment failed if tried enough times ([5537](https://github.com/longhorn/longhorn/issues/5537)) - @yangchiu @derekbit
- [BUG] instance-manager-e emits `Wait for process pvc-xxxx to shutdown` constantly ([5575](https://github.com/longhorn/longhorn/issues/5575)) - @derekbit @roger-ryao
- [BUG] Support bundle kit should respect node selector & taint toleration ([5614](https://github.com/longhorn/longhorn/issues/5614)) - @yangchiu @c3y1huang
- [BUG] Value overlapped in page Instance Manager Image ([5622](https://github.com/longhorn/longhorn/issues/5622)) - @smallteeths @chriscchien
- [BUG] Instance manager PDB created with wrong selector, thus blocking the draining of the wrongly selected node forever ([5680](https://github.com/longhorn/longhorn/issues/5680)) - @PhanLe1010 @chriscchien
- [BUG] During volume live engine upgrade, if the replica pod is killed, the volume is stuck in upgrading forever ([5684](https://github.com/longhorn/longhorn/issues/5684)) - @yangchiu @PhanLe1010
- [BUG] Instance manager PDBs cannot be removed if the longhorn-manager pod on its spec node is not available ([5688](https://github.com/longhorn/longhorn/issues/5688)) - @PhanLe1010 @roger-ryao
- [BUG] Rebuilding is possibly issued to a wrong replica ([5709](https://github.com/longhorn/longhorn/issues/5709)) - @ejweber @roger-ryao
- [BUG] Longhorn upgrade is not upgrading engineimage ([5740](https://github.com/longhorn/longhorn/issues/5740)) - @shuo-wu @chriscchien
- [BUG] `test_replica_auto_balance_when_replica_on_unschedulable_node` Error in creating volume with nodeSelector and dataLocality parameters ([5745](https://github.com/longhorn/longhorn/issues/5745)) - @c3y1huang @roger-ryao
- [BUG] Unable to backup volume after NFS server IP change ([5856](https://github.com/longhorn/longhorn/issues/5856)) - @derekbit @roger-ryao

## Misc

- [TASK] Check and update the networking doc & example YAMLs ([5651](https://github.com/longhorn/longhorn/issues/5651)) - @yangchiu @shuo-wu

## Contributors

- @ChanYiLin
- @PhanLe1010
- @c3y1huang
- @chriscchien
- @derekbit
- @ejweber
- @innobead
- @khushboo-rancher
- @mantissahz
- @roger-ryao
- @shuo-wu
- @smallteeths
- @weizhe0422
- @yangchiu

CHANGELOG/CHANGELOG-1.4.3.md (new file, 74 lines added)
@@ -0,0 +1,74 @@

## Release Note

### **v1.4.3 released!** 🎆

Longhorn v1.4.3 is the latest stable version of Longhorn 1.4.
It introduces improvements and bug fixes in the areas of stability, resilience, and so on. Please try it out and provide feedback. Thanks for all the contributions!

> For the definition of stable or latest release, please check [here](https://github.com/longhorn/longhorn#releases).

## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing v1.4.3.**

Longhorn supports three installation methods: Rancher App Marketplace, kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.4.3/deploy/install/).

## Upgrade

> **Please read the [important notes](https://longhorn.io/docs/1.4.3/deploy/important-notes/) first and ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.3 from v1.3.x/v1.4.x, which are the only supported source versions.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.4.3/deploy/upgrade/).

## Deprecation & Incompatibilities

N/A

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) on any outstanding issues found after this release.

## Improvement

- [IMPROVEMENT] Assign the pods to the same node where the strict-local volume is present ([5448](https://github.com/longhorn/longhorn/issues/5448)) - @c3y1huang @chriscchien

## Resilience

- [BUG] filesystem corrupted after delete instance-manager-r for a locality best-effort volume ([5801](https://github.com/longhorn/longhorn/issues/5801)) - @yangchiu @ChanYiLin @mantissahz

## Bugs

- [BUG] 'Upgrade Engine' still shows up in a specific situation when engine already upgraded ([3063](https://github.com/longhorn/longhorn/issues/3063)) - @weizhe0422 @PhanLe1010 @smallteeths
- [BUG] DR volume even after activation remains in standby mode if there are one or more failed replicas ([3069](https://github.com/longhorn/longhorn/issues/3069)) - @yangchiu @mantissahz
- [BUG] Prevent Longhorn uninstallation from getting stuck due to backups in error ([5868](https://github.com/longhorn/longhorn/issues/5868)) - @ChanYiLin @mantissahz
- [BUG] Unable to create support bundle if the previous one stayed in ReadyForDownload phase ([5882](https://github.com/longhorn/longhorn/issues/5882)) - @c3y1huang @roger-ryao
- [BUG] share-manager for a given PVC keeps restarting (other PVCs are working fine) ([5954](https://github.com/longhorn/longhorn/issues/5954)) - @yangchiu @derekbit
- [BUG] Replica auto-rebalance doesn't respect node selector ([5971](https://github.com/longhorn/longhorn/issues/5971)) - @c3y1huang @roger-ryao
- [BUG] Extra snapshot generated when clone from a detached volume ([5986](https://github.com/longhorn/longhorn/issues/5986)) - @weizhe0422 @ejweber
- [BUG] User created snapshot deleted after node drain and uncordon ([5992](https://github.com/longhorn/longhorn/issues/5992)) - @yangchiu @mantissahz
- [BUG] In some specific situation, system backup auto deleted when creating another one ([6045](https://github.com/longhorn/longhorn/issues/6045)) - @c3y1huang @chriscchien
- [BUG] Backing Image deletion stuck if it's deleted during uploading process and its backing image data source is in ready-for-transfer state ([6086](https://github.com/longhorn/longhorn/issues/6086)) - @WebberHuang1118 @chriscchien
- [BUG] Backing image manager fails when SELinux is enabled ([6108](https://github.com/longhorn/longhorn/issues/6108)) - @ejweber @chriscchien
- [BUG] test_dr_volume_with_restore_command_error failed ([6130](https://github.com/longhorn/longhorn/issues/6130)) - @mantissahz @roger-ryao
- [BUG] Longhorn doesn't remove the system backups CRD on uninstallation ([6185](https://github.com/longhorn/longhorn/issues/6185)) - @c3y1huang @khushboo-rancher
- [BUG] Test case test_ha_backup_deletion_recovery failed in rhel or rockylinux arm64 environment ([6213](https://github.com/longhorn/longhorn/issues/6213)) - @yangchiu @ChanYiLin @mantissahz
- [BUG] Engine continues to attempt to rebuild replica while detaching ([6217](https://github.com/longhorn/longhorn/issues/6217)) - @yangchiu @ejweber
- [BUG] Unable to receive support bundle from UI when it's large (400MB+) ([6256](https://github.com/longhorn/longhorn/issues/6256)) - @c3y1huang @chriscchien
- [BUG] Migration test case failed: unable to detach volume as migration is not ready yet ([6238](https://github.com/longhorn/longhorn/issues/6238)) - @yangchiu @PhanLe1010 @khushboo-rancher
- [BUG] Restored Volumes stuck in attaching state ([6239](https://github.com/longhorn/longhorn/issues/6239)) - @derekbit @roger-ryao

## Contributors

- @ChanYiLin
- @PhanLe1010
- @WebberHuang1118
- @c3y1huang
- @chriscchien
- @derekbit
- @ejweber
- @innobead
- @khushboo-rancher
- @mantissahz
- @roger-ryao
- @smallteeths
- @weizhe0422
- @yangchiu

CHANGELOG/CHANGELOG-1.5.0.md (new file, 301 lines added)
@@ -0,0 +1,301 @@

## Release Note

### **v1.5.0 released!** 🎆

Longhorn v1.5.0 is the latest version of Longhorn 1.5.
It introduces many enhancements, improvements, and bug fixes as described below, covering performance, stability, maintenance, resilience, and so on. Please try it out and provide feedback. Thanks for all the contributions!

> For the definition of stable or latest release, please check [here](https://github.com/longhorn/longhorn#releases).

- [v2 Data Engine based on SPDK - Preview](https://github.com/longhorn/longhorn/issues/5751)

> **Please note that this is a preview feature and should not be used in any production environment. A preview feature is disabled by default and may change in subsequent versions until it reaches general availability.**

In addition to the existing iSCSI stack (v1) data engine, we are introducing the v2 data engine based on SPDK (Storage Performance Development Kit). This release includes the introduction of volume lifecycle management, degraded volume handling, offline replica rebuilding, block device management, and orphaned replica management. For the performance benchmark and comparison with v1, check the report [here](https://longhorn.io/docs/1.5.0/spdk/performance-benchmark/).
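
As a rough illustration, the preview engine is toggled through a Longhorn global setting. The sketch below uses the kubernetes Python client and assumes the setting is named `v2-data-engine` and that Longhorn settings are exposed as `longhorn.io/v1beta2` custom objects in the `longhorn-system` namespace; treat the names as assumptions rather than the authoritative procedure.

```python
# Hypothetical sketch: enabling the v2 data engine preview via a Longhorn
# Setting custom resource (assumes kubernetes Python client and Longhorn 1.5.0).
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

api.patch_namespaced_custom_object(
    group="longhorn.io",
    version="v1beta2",
    namespace="longhorn-system",
    plural="settings",
    name="v2-data-engine",   # assumed name of the preview feature setting
    body={"value": "true"},  # Longhorn setting values are strings
)
```
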
- [Longhorn Volume Attachment](https://github.com/longhorn/longhorn/issues/3715)

Introducing the new Longhorn VolumeAttachment CR, which ensures exclusive attachment and supports automatic volume attachment and detachment for various headless operations such as volume cloning, backing image export, and recurring jobs.

- [Cluster Autoscaler - GA](https://github.com/longhorn/longhorn/issues/5238)

Cluster Autoscaler was initially introduced as an experimental feature in v1.3. After undergoing automatic validation on different public cloud Kubernetes distributions and receiving user feedback, it has now reached general availability.

- [Instance Manager Engine & Replica Consolidation](https://github.com/longhorn/longhorn/issues/5208)

Previously, there were two separate instance manager pods responsible for volume engine and replica process management. However, this setup required high resource usage, especially during live upgrades. In this release, we have merged these pods into a single instance manager, reducing the initial resource requirements.

- [Volume Backup Compression Methods](https://github.com/longhorn/longhorn/issues/5189)

Longhorn supports different compression methods for volume backups, including lz4, gzip, or no compression. This allows users to choose the most suitable method based on their data type and usage requirements.

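
A minimal sketch of choosing a method per StorageClass follows, assuming the Longhorn CSI driver accepts a `backupCompressionMethod` parameter with values `none`, `lz4`, or `gzip` (the parameter name is an assumption here, not a confirmed API).

```python
# Hypothetical sketch: a StorageClass whose volumes back up with gzip
# compression (kubernetes Python client; parameter name is assumed).
from kubernetes import client, config

config.load_kube_config()

sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="longhorn-gzip-backup"),
    provisioner="driver.longhorn.io",
    parameters={
        "numberOfReplicas": "3",
        "backupCompressionMethod": "gzip",  # assumed parameter name
    },
)
client.StorageV1Api().create_storage_class(body=sc)
```
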
- [Automatic Volume Trim Recurring Job](https://github.com/longhorn/longhorn/issues/5186)

While volume filesystem trim was introduced in v1.4, users had to perform the operation manually. From this release, users can create a recurring job that automatically runs the trim process, improving space efficiency without requiring human intervention.

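
A hedged sketch of such a job follows, assuming the new RecurringJob task type is named `filesystem-trim` and the CRD uses the usual `task`/`cron`/`retain`/`concurrency`/`groups` fields; verify the names against your installed CRD before relying on them.

```python
# Hypothetical sketch: a RecurringJob that trims volume filesystems nightly
# (kubernetes Python client; task name "filesystem-trim" is assumed).
from kubernetes import client, config

config.load_kube_config()

trim_job = {
    "apiVersion": "longhorn.io/v1beta2",
    "kind": "RecurringJob",
    "metadata": {"name": "nightly-trim", "namespace": "longhorn-system"},
    "spec": {
        "task": "filesystem-trim",  # assumed task name for automatic trim
        "cron": "0 2 * * *",        # every day at 02:00
        "retain": 0,                # retain does not apply to trim tasks
        "concurrency": 2,
        "groups": ["default"],      # applies to volumes in the default group
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="longhorn.io", version="v1beta2",
    namespace="longhorn-system", plural="recurringjobs", body=trim_job,
)
```
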
- [RWX Volume Trim](https://github.com/longhorn/longhorn/issues/5143)

Longhorn supports filesystem trim for RWX (Read-Write-Many) volumes, expanding the trim functionality beyond RWO (Read-Write-Once) volumes.

- [Upgrade Path Enforcement & Downgrade Prevention](https://github.com/longhorn/longhorn/issues/5131)

To ensure compatibility after an upgrade, we have implemented upgrade path enforcement. This prevents unintended downgrades and ensures the system and data remain intact.

- [Backing Image Management via CSI VolumeSnapshot](https://github.com/longhorn/longhorn/issues/5005)

Users can now utilize the unified CSI VolumeSnapshot interface to manage Backing Images similarly to volume snapshots and backups.

- [Snapshot Cleanup & Delete Recurring Job](https://github.com/longhorn/longhorn/issues/3836)

Introducing two new recurring job types specifically designed for snapshot cleanup and deletion. These jobs allow users to remove unnecessary snapshots for better space efficiency.

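
As a sketch, the two housekeeping tasks might be wired up as below, assuming they are named `snapshot-cleanup` (purges deletable system snapshots) and `snapshot-delete` (prunes the oldest snapshots down to `retain`); both names are assumptions based on this release note, not verified field values.

```python
# Hypothetical sketch: the two new snapshot housekeeping RecurringJob task
# types (kubernetes Python client; task names are assumed).
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

def recurring_job(name, task, cron, retain):
    # Build a minimal RecurringJob manifest for the given housekeeping task.
    return {
        "apiVersion": "longhorn.io/v1beta2",
        "kind": "RecurringJob",
        "metadata": {"name": name, "namespace": "longhorn-system"},
        "spec": {"task": task, "cron": cron, "retain": retain,
                 "concurrency": 1, "groups": ["default"]},
    }

for job in (
    # Clean up deletable system snapshots without touching user snapshots.
    recurring_job("weekly-snapshot-cleanup", "snapshot-cleanup", "0 4 * * 0", 0),
    # Delete the oldest snapshots, keeping only the 10 most recent.
    recurring_job("weekly-snapshot-delete", "snapshot-delete", "0 5 * * 0", 10),
):
    api.create_namespaced_custom_object(
        group="longhorn.io", version="v1beta2",
        namespace="longhorn-system", plural="recurringjobs", body=job,
    )
```
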
- [CIFS Backup Store](https://github.com/longhorn/longhorn/issues/3599) & [Azure Backup Store](https://github.com/longhorn/longhorn/issues/1309)

To enhance users' backup strategies and align with data governance policies, Longhorn now supports additional backup storage protocols, including CIFS and Azure.

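
A loose sketch of pointing Longhorn at one of the new targets follows. It assumes the settings are named `backup-target` and `backup-target-credential-secret` and that CIFS and Azure targets use `cifs://` and `azblob://` style URLs; all of these names, and the example server path, are assumptions for illustration only.

```python
# Hypothetical sketch: switching the backup target to a CIFS share
# (kubernetes Python client; setting names and URL scheme are assumed).
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

for name, value in (
    # An Azure target would instead look like "azblob://<container>@...".
    ("backup-target", "cifs://backup-server/longhorn-share"),
    ("backup-target-credential-secret", "cifs-credentials"),
):
    api.patch_namespaced_custom_object(
        group="longhorn.io", version="v1beta2",
        namespace="longhorn-system", plural="settings",
        name=name, body={"value": value},
    )
```
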
- [Kubernetes Upgrade Node Drain Policy](https://github.com/longhorn/longhorn/issues/3304)

The new Node Drain Policy provides flexible strategies to protect volume data during Kubernetes upgrades or node maintenance operations. This ensures the integrity and availability of your volumes.

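
Per the improvement entry above, this policy replaces the deprecated `allow-node-drain-with-last-healthy-replica` setting. The sketch below assumes the new setting is named `node-drain-policy` and accepts values such as `block-if-contains-last-replica`; the value names are assumptions drawn from the setting's description, not a verified enumeration.

```python
# Hypothetical sketch: picking a drain policy before node maintenance
# (kubernetes Python client; setting and value names are assumed).
from kubernetes import client, config

config.load_kube_config()

client.CustomObjectsApi().patch_namespaced_custom_object(
    group="longhorn.io", version="v1beta2",
    namespace="longhorn-system", plural="settings",
    name="node-drain-policy",
    # Refuse to drain a node holding the last healthy replica of any volume.
    body={"value": "block-if-contains-last-replica"},
)
```
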
## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing Longhorn v1.5.0.**

Longhorn supports three installation methods: Rancher App Marketplace, kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.5.0/deploy/install/).

## Upgrade

> **Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.5.0 from v1.4.x, which is the only supported source version.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.5.0/deploy/upgrade/).

## Deprecation & Incompatibilities

Please check the [important notes](https://longhorn.io/docs/1.5.0/deploy/important-notes/) to learn more about deprecated, removed, or incompatible features and other important changes. If you upgrade indirectly from an older version like v1.3.x, please also check the corresponding important notes for each version on the upgrade path.

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) on any outstanding issues found after this release.

## Highlights

- [DOC] Provide the user guide for Kubernetes upgrade ([494](https://github.com/longhorn/longhorn/issues/494)) - @PhanLe1010
- [FEATURE] Backups to Azure Blob Storage ([1309](https://github.com/longhorn/longhorn/issues/1309)) - @mantissahz @chriscchien
- [IMPROVEMENT] Use PDB to protect Longhorn components from unexpected drains ([3304](https://github.com/longhorn/longhorn/issues/3304)) - @yangchiu @PhanLe1010
- [FEATURE] CIFS Backup Store Support ([3599](https://github.com/longhorn/longhorn/issues/3599)) - @derekbit @chriscchien
- [IMPROVEMENT] Consolidate volume attach/detach implementation ([3715](https://github.com/longhorn/longhorn/issues/3715)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Periodically clean up volume snapshots ([3836](https://github.com/longhorn/longhorn/issues/3836)) - @c3y1huang @chriscchien
- [IMPROVEMENT] Introduce timeout mechanism for the sparse file syncing service ([4305](https://github.com/longhorn/longhorn/issues/4305)) - @yangchiu @ChanYiLin
- [IMPROVEMENT] Recurring jobs create new snapshots while not being able to clean up old ones ([4898](https://github.com/longhorn/longhorn/issues/4898)) - @mantissahz @chriscchien
- [FEATURE] BackingImage Management via VolumeSnapshot ([5005](https://github.com/longhorn/longhorn/issues/5005)) - @ChanYiLin @chriscchien
- [FEATURE] Upgrade path enforcement & downgrade prevention ([5131](https://github.com/longhorn/longhorn/issues/5131)) - @yangchiu @mantissahz
- [FEATURE] Support RWX volume trim ([5143](https://github.com/longhorn/longhorn/issues/5143)) - @derekbit @chriscchien
- [FEATURE] Auto Trim via recurring job ([5186](https://github.com/longhorn/longhorn/issues/5186)) - @c3y1huang @chriscchien
- [FEATURE] Introduce faster compression and multiple threads for volume backup & restore ([5189](https://github.com/longhorn/longhorn/issues/5189)) - @derekbit @roger-ryao
- [FEATURE] Consolidate Instance Manager Engine & Replica for resource consumption reduction ([5208](https://github.com/longhorn/longhorn/issues/5208)) - @yangchiu @c3y1huang
- [FEATURE] Cluster Autoscaler Support GA ([5238](https://github.com/longhorn/longhorn/issues/5238)) - @yangchiu @c3y1huang
- [FEATURE] Update K8s version support and component/pkg/build dependencies for Longhorn 1.5 ([5595](https://github.com/longhorn/longhorn/issues/5595)) - @yangchiu @ejweber
- [FEATURE] Support SPDK Data Engine - Preview ([5751](https://github.com/longhorn/longhorn/issues/5751)) - @derekbit @shuo-wu @DamiaSan

## Enhancements

- [FEATURE] Allow users to directly activate a restoring/DR volume as long as there is one ready replica ([1512](https://github.com/longhorn/longhorn/issues/1512)) - @mantissahz @weizhe0422
- [REFACTOR] Volume controller refactoring/split-up to simplify the control flow ([2527](https://github.com/longhorn/longhorn/issues/2527)) - @PhanLe1010 @chriscchien
- [FEATURE] Import and export SPDK longhorn volumes to longhorn sparse file directory ([4100](https://github.com/longhorn/longhorn/issues/4100)) - @DamiaSan
- [FEATURE] Add a global `storage reserved` setting for newly created longhorn nodes' disks ([4773](https://github.com/longhorn/longhorn/issues/4773)) - @mantissahz @chriscchien
- [FEATURE] Support backup volumes during system backup ([5011](https://github.com/longhorn/longhorn/issues/5011)) - @c3y1huang @chriscchien
- [FEATURE] Support SPDK lvol shallow copy for newly replica creation ([5217](https://github.com/longhorn/longhorn/issues/5217)) - @DamiaSan
- [FEATURE] Introduce longhorn-spdk-engine for SPDK volume management ([5282](https://github.com/longhorn/longhorn/issues/5282)) - @shuo-wu
- [FEATURE] Support replica-zone-soft-anti-affinity setting per volume ([5358](https://github.com/longhorn/longhorn/issues/5358)) - @ChanYiLin @smallteeths @chriscchien
- [FEATURE] Install Opt-In NetworkPolicies ([5403](https://github.com/longhorn/longhorn/issues/5403)) - @yangchiu @ChanYiLin
- [FEATURE] Create Longhorn SPDK Engine component with basic fundamental functions ([5406](https://github.com/longhorn/longhorn/issues/5406)) - @shuo-wu
- [FEATURE] Add status APIs for shallow copy and IO pause/resume ([5647](https://github.com/longhorn/longhorn/issues/5647)) - @DamiaSan
- [FEATURE] Introduce a new disk type, disk management and replica scheduler for SPDK volumes ([5683](https://github.com/longhorn/longhorn/issues/5683)) - @derekbit @roger-ryao
- [FEATURE] Support replica scheduling for SPDK volume ([5711](https://github.com/longhorn/longhorn/issues/5711)) - @derekbit
- [FEATURE] Create SPDK gRPC service for instance manager ([5712](https://github.com/longhorn/longhorn/issues/5712)) - @shuo-wu
- [FEATURE] Environment check script for Longhorn with SPDK ([5738](https://github.com/longhorn/longhorn/issues/5738)) - @derekbit @chriscchien
- [FEATURE] Deployment manifests for helping install SPDK dependencies, utilities and libraries ([5739](https://github.com/longhorn/longhorn/issues/5739)) - @yangchiu @derekbit
- [FEATURE] Implement Disk gRPC Service in Instance Manager for collecting SPDK disk statistics from SPDK gRPC service ([5744](https://github.com/longhorn/longhorn/issues/5744)) - @derekbit @chriscchien
- [FEATURE] Support for SPDK RAID1 by setting the minimum number of base_bdevs to 1 ([5758](https://github.com/longhorn/longhorn/issues/5758)) - @yangchiu @DamiaSan
- [FEATURE] Add a global setting for enabling and disabling SPDK feature ([5778](https://github.com/longhorn/longhorn/issues/5778)) - @yangchiu @derekbit
- [FEATURE] Identify and manage orphaned lvols and raid bdevs if the associated `Volume` resources do not exist ([5827](https://github.com/longhorn/longhorn/issues/5827)) - @yangchiu @derekbit
- [FEATURE] Longhorn UI for SPDK feature ([5846](https://github.com/longhorn/longhorn/issues/5846)) - @smallteeths @chriscchien
- [FEATURE] UI modification to work with new AD mechanism (Longhorn UI -> Longhorn API) ([6004](https://github.com/longhorn/longhorn/issues/6004)) - @yangchiu @smallteeths
- [FEATURE] Replica offline rebuild over SPDK - data engine ([6067](https://github.com/longhorn/longhorn/issues/6067)) - @shuo-wu
- [FEATURE] Support automatic offline replica rebuilding of volumes using SPDK data engine ([6071](https://github.com/longhorn/longhorn/issues/6071)) - @yangchiu @derekbit

## Improvement

- [IMPROVEMENT] Do not count the failed-replica reuse failure caused by the disconnection ([1923](https://github.com/longhorn/longhorn/issues/1923)) - @yangchiu @mantissahz
- [IMPROVEMENT] Consider changing the over provisioning default/recommendation to 100% (no over-provisioning) ([2694](https://github.com/longhorn/longhorn/issues/2694)) - @c3y1huang @chriscchien
- [BUG] StorageClass of PV and PVC of a recovered PV should not always be default ([3506](https://github.com/longhorn/longhorn/issues/3506)) - @ChanYiLin @smallteeths @roger-ryao
- [IMPROVEMENT] Auto-attach volume for K8s CSI snapshot ([3726](https://github.com/longhorn/longhorn/issues/3726)) - @weizhe0422 @PhanLe1010
- [IMPROVEMENT] Change Longhorn API to create/delete snapshot CRs instead of calling engine CLI ([3995](https://github.com/longhorn/longhorn/issues/3995)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Add support for crypto parameters for RWX volumes ([4829](https://github.com/longhorn/longhorn/issues/4829)) - @mantissahz @roger-ryao
- [IMPROVEMENT] Remove the global setting `mkfs-ext4-parameters` ([4914](https://github.com/longhorn/longhorn/issues/4914)) - @ejweber @roger-ryao
- [IMPROVEMENT] Move all snapshot-related settings to one place ([4930](https://github.com/longhorn/longhorn/issues/4930)) - @smallteeths @roger-ryao
- [IMPROVEMENT] Remove system managed component image settings ([5028](https://github.com/longhorn/longhorn/issues/5028)) - @mantissahz @chriscchien
- [IMPROVEMENT] Set default `engine-replica-timeout` value for engine controller start command ([5031](https://github.com/longhorn/longhorn/issues/5031)) - @derekbit @chriscchien
- [IMPROVEMENT] Support bundle collects dmesg, syslog and related information of longhorn nodes ([5073](https://github.com/longhorn/longhorn/issues/5073)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Collect volume, system, feature info for metrics for better usage awareness ([5235](https://github.com/longhorn/longhorn/issues/5235)) - @c3y1huang @chriscchien @roger-ryao
- [IMPROVEMENT] Update uninstallation info to include the 'Deleting Confirmation Flag' in chart ([5250](https://github.com/longhorn/longhorn/issues/5250)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Disable Revision Counter for Strict-Local dataLocality ([5257](https://github.com/longhorn/longhorn/issues/5257)) - @derekbit @roger-ryao
- [IMPROVEMENT] Fix Guaranteed Engine Manager CPU recommendation formula in UI ([5338](https://github.com/longhorn/longhorn/issues/5338)) - @c3y1huang @smallteeths @roger-ryao
- [IMPROVEMENT] Update PSP validation in the Longhorn upstream chart ([5339](https://github.com/longhorn/longhorn/issues/5339)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Update ganesha nfs to 4.2.3 ([5356](https://github.com/longhorn/longhorn/issues/5356)) - @derekbit @roger-ryao
- [IMPROVEMENT] Set write-cache of longhorn block device to off explicitly ([5382](https://github.com/longhorn/longhorn/issues/5382)) - @derekbit @chriscchien
- [IMPROVEMENT] Clean up unused backupstore mountpoint ([5391](https://github.com/longhorn/longhorn/issues/5391)) - @derekbit @chriscchien
- [DOC] Update Kubernetes version info to have consistent description from the longhorn documentation in chart ([5399](https://github.com/longhorn/longhorn/issues/5399)) - @ChanYiLin @roger-ryao
- [IMPROVEMENT] Fix BackingImage uploading/downloading flow to prevent client timeout ([5443](https://github.com/longhorn/longhorn/issues/5443)) - @ChanYiLin @chriscchien
- [IMPROVEMENT] Assign the pods to the same node where the strict-local volume is present ([5448](https://github.com/longhorn/longhorn/issues/5448)) - @c3y1huang @chriscchien
- [IMPROVEMENT] Show an explicit message when trying to attach a volume whose engine and replica were on a deleted node ([5545](https://github.com/longhorn/longhorn/issues/5545)) - @ChanYiLin @chriscchien
- [IMPROVEMENT] Create a new setting so that Longhorn removes PDB for instance-manager-r that doesn't have any running instance inside it ([5549](https://github.com/longhorn/longhorn/issues/5549)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Merge conversion/admission webhook and recovery backend services into longhorn-manager ([5590](https://github.com/longhorn/longhorn/issues/5590)) - @ChanYiLin @chriscchien
- [IMPROVEMENT][UI] Recurring jobs create new snapshots while not being able to clean up old ones ([5610](https://github.com/longhorn/longhorn/issues/5610)) - @mantissahz @smallteeths @roger-ryao
- [IMPROVEMENT] Only activate replica if it doesn't have deletion timestamp during volume engine upgrade ([5632](https://github.com/longhorn/longhorn/issues/5632)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Clean up backup target if the backup target setting is unset ([5655](https://github.com/longhorn/longhorn/issues/5655)) - @yangchiu @ChanYiLin
- [IMPROVEMENT] Bump CSI sidecar components' version ([5672](https://github.com/longhorn/longhorn/issues/5672)) - @yangchiu @ejweber
- [IMPROVEMENT] Configure log level of Longhorn components ([5888](https://github.com/longhorn/longhorn/issues/5888)) - @ChanYiLin @weizhe0422
- [IMPROVEMENT] Remove development toolchain from Longhorn images ([6022](https://github.com/longhorn/longhorn/issues/6022)) - @ChanYiLin @derekbit
- [IMPROVEMENT] Reduce replica process's number of allocated ports ([6079](https://github.com/longhorn/longhorn/issues/6079)) - @ChanYiLin @derekbit
- [IMPROVEMENT] UI supports automatic replica rebuilding for SPDK volumes ([6107](https://github.com/longhorn/longhorn/issues/6107)) - @smallteeths @roger-ryao
- [IMPROVEMENT] Minor UX changes for Longhorn SPDK ([6126](https://github.com/longhorn/longhorn/issues/6126)) - @derekbit @roger-ryao
- [IMPROVEMENT] Instance manager spdk_tgt resilience due to spdk_tgt crash ([6155](https://github.com/longhorn/longhorn/issues/6155)) - @yangchiu @derekbit
- [IMPROVEMENT] Determine replica/engine port count in longhorn-manager (control plane) instead ([6163](https://github.com/longhorn/longhorn/issues/6163)) - @derekbit @chriscchien
- [IMPROVEMENT] SPDK client should function after encountering a decoding error ([6191](https://github.com/longhorn/longhorn/issues/6191)) - @yangchiu @shuo-wu

## Performance
|
||||
|
||||
- [REFACTORING] Evaluate the impact of removing the client side compression for backup blocks ([1409](https://github.com/longhorn/longhorn/issues/1409)) - @derekbit

## Resilience

- [BUG] If backing image downloading fails on one node, it doesn't try on other nodes. ([3746](https://github.com/longhorn/longhorn/issues/3746)) - @ChanYiLin
- [BUG] Replica rebuilding caused by rke2/kubelet restart ([5340](https://github.com/longhorn/longhorn/issues/5340)) - @derekbit @chriscchien
- [BUG] Volume restoration will never complete if attached node is down ([5464](https://github.com/longhorn/longhorn/issues/5464)) - @derekbit @weizhe0422 @chriscchien
- [BUG] Node disconnection test failed ([5476](https://github.com/longhorn/longhorn/issues/5476)) - @yangchiu @derekbit
- [BUG] Physical node down test failed ([5477](https://github.com/longhorn/longhorn/issues/5477)) - @derekbit @chriscchien
- [BUG] Backing image with sync failure ([5481](https://github.com/longhorn/longhorn/issues/5481)) - @ChanYiLin @roger-ryao
- [BUG] share-manager pod failed to restart after kubelet restart ([5507](https://github.com/longhorn/longhorn/issues/5507)) - @yangchiu @derekbit
- [BUG] Directly mark replica as failed if the node is deleted ([5542](https://github.com/longhorn/longhorn/issues/5542)) - @weizhe0422 @roger-ryao
- [BUG] RWX volume is stuck at detaching when the attached node is down ([5558](https://github.com/longhorn/longhorn/issues/5558)) - @derekbit @roger-ryao
- [BUG] Unable to export RAID1 bdev in degraded state ([5650](https://github.com/longhorn/longhorn/issues/5650)) - @chriscchien @DamiaSan
- [BUG] Backup monitor gets stuck in an infinite loop if backup isn't found ([5662](https://github.com/longhorn/longhorn/issues/5662)) - @derekbit @chriscchien
- [BUG] Resources such as replicas are somehow not mutated when network is unstable ([5762](https://github.com/longhorn/longhorn/issues/5762)) - @derekbit @roger-ryao
- [BUG] Filesystem corrupted after deleting instance-manager-r for a locality best-effort volume ([5801](https://github.com/longhorn/longhorn/issues/5801)) - @yangchiu @ChanYiLin @mantissahz

## Stability

- [BUG] nfs backup broken - NFS server: mkdir - file exists ([4626](https://github.com/longhorn/longhorn/issues/4626)) - @yangchiu @derekbit
- [BUG] Memory leak in CSI plugin caused by stuck umount processes if the RWX volume is already gone ([5296](https://github.com/longhorn/longhorn/issues/5296)) - @derekbit @roger-ryao

## Bugs

- [BUG] 'Upgrade Engine' still shows up in a specific situation when engine already upgraded ([3063](https://github.com/longhorn/longhorn/issues/3063)) - @weizhe0422 @PhanLe1010 @smallteeths
- [BUG] DR volume even after activation remains in standby mode if there are one or more failed replicas. ([3069](https://github.com/longhorn/longhorn/issues/3069)) - @yangchiu @mantissahz
- [BUG] Volume not able to attach with raw type backing image ([3437](https://github.com/longhorn/longhorn/issues/3437)) - @yangchiu @ChanYiLin
- [BUG] After deleting an uploading backing image, the corresponding LH temp file is not deleted ([3682](https://github.com/longhorn/longhorn/issues/3682)) - @ChanYiLin @chriscchien
- [BUG] Cloned PVC from detached volume will be stuck at not ready for workload ([3692](https://github.com/longhorn/longhorn/issues/3692)) - @PhanLe1010 @chriscchien
- [BUG] Block device volume failed to unmount when it is detached unexpectedly ([3778](https://github.com/longhorn/longhorn/issues/3778)) - @PhanLe1010 @chriscchien
- [BUG] After migration of Longhorn from Rancher old UI to dashboard, the csi-plugin doesn't update ([4519](https://github.com/longhorn/longhorn/issues/4519)) - @mantissahz @roger-ryao
- [BUG] Volumes Stuck in Attach/Detach Loop when running on OpenShift/OKD ([4988](https://github.com/longhorn/longhorn/issues/4988)) - @ChanYiLin
- [BUG] Longhorn 1.3.2 fails to backup & restore volumes behind Internet proxy ([5054](https://github.com/longhorn/longhorn/issues/5054)) - @mantissahz @chriscchien
- [BUG] Instance manager pod does not respect node taint ([5161](https://github.com/longhorn/longhorn/issues/5161)) - @ejweber
- [BUG] RWX doesn't work with release 1.4.0 due to end grace update error from recovery backend ([5183](https://github.com/longhorn/longhorn/issues/5183)) - @derekbit @chriscchien
- [BUG] Incorrect indentation of charts/questions.yaml ([5196](https://github.com/longhorn/longhorn/issues/5196)) - @mantissahz @roger-ryao
- [BUG] Updating option "Allow snapshots removal during trim" for old volumes failed ([5218](https://github.com/longhorn/longhorn/issues/5218)) - @shuo-wu @roger-ryao
- [BUG] Since 1.4.0 RWX volume failing regularly ([5224](https://github.com/longhorn/longhorn/issues/5224)) - @derekbit
- [BUG] Cannot create backup in a cluster where the engine image is not fully deployed ([5248](https://github.com/longhorn/longhorn/issues/5248)) - @ChanYiLin @roger-ryao
- [BUG] Incorrect router retry mechanism ([5259](https://github.com/longhorn/longhorn/issues/5259)) - @mantissahz @chriscchien
- [BUG] System Backup is stuck at Uploading if there are PVs not provisioned by the CSI driver ([5286](https://github.com/longhorn/longhorn/issues/5286)) - @c3y1huang @chriscchien
- [BUG] Sync up with backup target during DR volume activation ([5292](https://github.com/longhorn/longhorn/issues/5292)) - @yangchiu @weizhe0422
- [BUG] environment_check.sh does not handle different kernel versions in cluster correctly ([5304](https://github.com/longhorn/longhorn/issues/5304)) - @achims311 @roger-ryao
- [BUG] instance-manager-r high memory consumption ([5312](https://github.com/longhorn/longhorn/issues/5312)) - @derekbit @roger-ryao
- [BUG] Unable to upgrade longhorn from v1.3.2 to master-head ([5368](https://github.com/longhorn/longhorn/issues/5368)) - @yangchiu @derekbit
- [BUG] Modifying engineManagerCPURequest and replicaManagerCPURequest won't raise the resource request in the instance-manager-e pod ([5419](https://github.com/longhorn/longhorn/issues/5419)) - @c3y1huang
- [BUG] Error message not consistent between create/update recurring job when the retain number is greater than 50 ([5434](https://github.com/longhorn/longhorn/issues/5434)) - @c3y1huang @chriscchien
- [BUG] Do not copy Host header to API requests forwarded to Longhorn Manager ([5438](https://github.com/longhorn/longhorn/issues/5438)) - @yangchiu @smallteeths
- [BUG] RWX volume attachment is getting Failed ([5456](https://github.com/longhorn/longhorn/issues/5456)) - @derekbit
- [BUG] test case test_backup_lock_deletion_during_restoration failed ([5458](https://github.com/longhorn/longhorn/issues/5458)) - @yangchiu @derekbit
- [BUG] Unable to create support bundle agent pod in air-gap environment ([5467](https://github.com/longhorn/longhorn/issues/5467)) - @yangchiu @c3y1huang
- [BUG] Example of data migration doesn't work for hidden/./dot-files ([5484](https://github.com/longhorn/longhorn/issues/5484)) - @hedefalk @shuo-wu @chriscchien
- [BUG] Upgrade engine --> spec.restoreVolumeRecurringJob and spec.snapshotDataIntegrity Unsupported value ([5485](https://github.com/longhorn/longhorn/issues/5485)) - @yangchiu @derekbit
- [BUG] test case test_dr_volume_with_backup_block_deletion failed ([5489](https://github.com/longhorn/longhorn/issues/5489)) - @yangchiu @derekbit
- [BUG] Bulk backup deletion causes restoring volume to finish with attached state. ([5506](https://github.com/longhorn/longhorn/issues/5506)) - @ChanYiLin @roger-ryao
- [BUG] Volume expansion starts for no reason, gets stuck on current size > expected size ([5513](https://github.com/longhorn/longhorn/issues/5513)) - @mantissahz @roger-ryao
- [BUG] RWX volume attachment failed if tried enough times ([5537](https://github.com/longhorn/longhorn/issues/5537)) - @yangchiu @derekbit
- [BUG] instance-manager-e emits `Wait for process pvc-xxxx to shutdown` constantly ([5575](https://github.com/longhorn/longhorn/issues/5575)) - @derekbit @roger-ryao
- [BUG] Support bundle kit should respect node selector & taint toleration ([5614](https://github.com/longhorn/longhorn/issues/5614)) - @yangchiu @c3y1huang
- [BUG] Value overlapped in page Instance Manager Image ([5622](https://github.com/longhorn/longhorn/issues/5622)) - @smallteeths @chriscchien
- [BUG] Updated Rocky 9 (and others) can't attach due to SELinux ([5627](https://github.com/longhorn/longhorn/issues/5627)) - @yangchiu @ejweber
- [BUG] Fix misleading error messages when creating a mount point for a backup store ([5630](https://github.com/longhorn/longhorn/issues/5630)) - @derekbit
- [BUG] Instance manager PDB created with wrong selector, thus blocking the draining of the wrongly selected node forever ([5680](https://github.com/longhorn/longhorn/issues/5680)) - @PhanLe1010 @chriscchien
- [BUG] During volume live engine upgrade, if the replica pod is killed, the volume is stuck in upgrading forever ([5684](https://github.com/longhorn/longhorn/issues/5684)) - @yangchiu @PhanLe1010
- [BUG] Instance manager PDBs cannot be removed if the longhorn-manager pod on its spec node is not available ([5688](https://github.com/longhorn/longhorn/issues/5688)) - @PhanLe1010 @roger-ryao
- [BUG] Replica rebuilding is possibly issued to a wrong replica ([5709](https://github.com/longhorn/longhorn/issues/5709)) - @ejweber @roger-ryao
- [BUG] Observing replica on new IM-r before upgrading the volume ([5729](https://github.com/longhorn/longhorn/issues/5729)) - @c3y1huang
- [BUG] Longhorn upgrade is not upgrading engine image ([5740](https://github.com/longhorn/longhorn/issues/5740)) - @shuo-wu @chriscchien
- [BUG] `test_replica_auto_balance_when_replica_on_unschedulable_node` Error in creating volume with nodeSelector and dataLocality parameters ([5745](https://github.com/longhorn/longhorn/issues/5745)) - @c3y1huang @roger-ryao
- [BUG] Unable to backup volume after NFS server IP change ([5856](https://github.com/longhorn/longhorn/issues/5856)) - @derekbit @roger-ryao
- [BUG] Prevent Longhorn uninstallation from getting stuck due to backups in error ([5868](https://github.com/longhorn/longhorn/issues/5868)) - @ChanYiLin @mantissahz
- [BUG] Unable to create support bundle if the previous one stayed in ReadyForDownload phase ([5882](https://github.com/longhorn/longhorn/issues/5882)) - @c3y1huang @roger-ryao
- [BUG] share-manager for a given pvc keeps restarting (other pvc are working fine) ([5954](https://github.com/longhorn/longhorn/issues/5954)) - @yangchiu @derekbit
- [BUG] Replica auto-rebalance doesn't respect node selector ([5971](https://github.com/longhorn/longhorn/issues/5971)) - @c3y1huang @roger-ryao
- [BUG] Volume detached automatically after upgrading Longhorn ([5983](https://github.com/longhorn/longhorn/issues/5983)) - @yangchiu @PhanLe1010
- [BUG] Extra snapshot generated when cloning from a detached volume ([5986](https://github.com/longhorn/longhorn/issues/5986)) - @weizhe0422 @ejweber
- [BUG] User created snapshot deleted after node drain and uncordon ([5992](https://github.com/longhorn/longhorn/issues/5992)) - @yangchiu @mantissahz
- [BUG] Webhook PDBs are not removed after upgrading to master-head ([6026](https://github.com/longhorn/longhorn/issues/6026)) - @weizhe0422 @PhanLe1010
- [BUG] In some specific situations, system backup auto deleted when creating another one ([6045](https://github.com/longhorn/longhorn/issues/6045)) - @c3y1huang @chriscchien
- [BUG] Backing Image deletion stuck if it's deleted during the uploading process and the bids is in ready-for-transfer state ([6086](https://github.com/longhorn/longhorn/issues/6086)) - @WebberHuang1118 @chriscchien
- [BUG] A backup target backed by a Samba server is not recognized ([6100](https://github.com/longhorn/longhorn/issues/6100)) - @derekbit @weizhe0422
- [BUG] Backing image manager fails when SELinux is enabled ([6108](https://github.com/longhorn/longhorn/issues/6108)) - @ejweber @chriscchien
- [BUG] Force deleting a volume makes the SPDK disk unschedulable ([6110](https://github.com/longhorn/longhorn/issues/6110)) - @derekbit
- [BUG] share-manager terminated during Longhorn upgrade causes RWX volume to stop working ([6120](https://github.com/longhorn/longhorn/issues/6120)) - @yangchiu @derekbit
- [BUG] SPDK Volume snapshotList API Error ([6123](https://github.com/longhorn/longhorn/issues/6123)) - @derekbit @chriscchien
- [BUG] test_recurring_jobs_allow_detached_volume failed ([6124](https://github.com/longhorn/longhorn/issues/6124)) - @ChanYiLin @roger-ryao
- [BUG] Cron job triggered replica rebuilding keeps repeating itself after corrupting snapshot data ([6129](https://github.com/longhorn/longhorn/issues/6129)) - @yangchiu @mantissahz
- [BUG] test_dr_volume_with_restore_command_error failed ([6130](https://github.com/longhorn/longhorn/issues/6130)) - @mantissahz @roger-ryao
- [BUG] RWX volume remains attached after workload deleted if it's upgraded from v1.4.2 ([6139](https://github.com/longhorn/longhorn/issues/6139)) - @PhanLe1010 @chriscchien
- [BUG] timestamp or checksum not matched in test_snapshot_hash_detect_corruption test case ([6145](https://github.com/longhorn/longhorn/issues/6145)) - @yangchiu @derekbit
- [BUG] When a v2 volume is attached in maintenance mode, removing a replica will lead to the volume stuck in an attaching-detaching loop ([6166](https://github.com/longhorn/longhorn/issues/6166)) - @derekbit @chriscchien
- [BUG] Misleading offline rebuilding hint if offline rebuilding is not enabled ([6169](https://github.com/longhorn/longhorn/issues/6169)) - @smallteeths @roger-ryao
- [BUG] Longhorn doesn't remove the system backups CRD on uninstallation ([6185](https://github.com/longhorn/longhorn/issues/6185)) - @c3y1huang @khushboo-rancher
- [BUG] Volume attachment related error logs in uninstaller pod ([6197](https://github.com/longhorn/longhorn/issues/6197)) - @yangchiu @PhanLe1010
- [BUG] Test case test_ha_backup_deletion_recovery failed in rhel or rockylinux arm64 environment ([6213](https://github.com/longhorn/longhorn/issues/6213)) - @yangchiu @ChanYiLin @mantissahz
- [BUG] Migration test cases could fail due to unexpected volume controllers and replicas status ([6215](https://github.com/longhorn/longhorn/issues/6215)) - @yangchiu @PhanLe1010
- [BUG] Engine continues to attempt to rebuild replica while detaching ([6217](https://github.com/longhorn/longhorn/issues/6217)) - @yangchiu @ejweber

## Misc

- [TASK] Remove deprecated volume spec recurringJobs and storageClass recurringJobs field ([2865](https://github.com/longhorn/longhorn/issues/2865)) - @c3y1huang @chriscchien
- [TASK] Remove deprecated fields after CRD API version bump ([3289](https://github.com/longhorn/longhorn/issues/3289)) - @c3y1huang @roger-ryao
- [TASK] Replace jobq lib with an alternative way for listing remote backup volumes and info ([4176](https://github.com/longhorn/longhorn/issues/4176)) - @ChanYiLin @chriscchien
- [DOC] Update the Longhorn document in Uninstalling Longhorn using kubectl ([4841](https://github.com/longhorn/longhorn/issues/4841)) - @roger-ryao
- [TASK] Remove a deprecated feature `disable-replica-rebuild` from longhorn-manager ([4997](https://github.com/longhorn/longhorn/issues/4997)) - @ejweber @chriscchien
- [TASK] Update the distro matrix supports on Longhorn docs for 1.5 ([5177](https://github.com/longhorn/longhorn/issues/5177)) - @yangchiu
- [TASK] Clarify if any upcoming K8s API deprecation/removal will impact Longhorn 1.4 ([5180](https://github.com/longhorn/longhorn/issues/5180)) - @PhanLe1010
- [TASK] Revert affinity for Longhorn user deployed components ([5191](https://github.com/longhorn/longhorn/issues/5191)) - @weizhe0422 @ejweber
- [TASK] Add GitHub action for CI to lib repos for supporting dependency bot ([5239](https://github.com/longhorn/longhorn/issues/5239)) -
- [DOC] Update the readme of longhorn-spdk-engine about using new Longhorn (RAID1) bdev ([5256](https://github.com/longhorn/longhorn/issues/5256)) - @DamiaSan
- [TASK][UI] add new recurring job tasks ([5272](https://github.com/longhorn/longhorn/issues/5272)) - @smallteeths @chriscchien
- [DOC] Update the node maintenance doc to cover upgrade prerequisites for Rancher ([5278](https://github.com/longhorn/longhorn/issues/5278)) - @PhanLe1010
- [TASK] Run build-engine-test-images automatically when having incompatible engine on master ([5400](https://github.com/longhorn/longhorn/issues/5400)) - @yangchiu
- [TASK] Update k8s.gcr.io to registry.k8s.io in repos ([5432](https://github.com/longhorn/longhorn/issues/5432)) - @yangchiu
- [TASK][UI] add new recurring job task - filesystem trim ([5529](https://github.com/longhorn/longhorn/issues/5529)) - @smallteeths @chriscchien
- doc: update prerequisites in chart readme to make it consistent with documentation v1.3.x ([5531](https://github.com/longhorn/longhorn/pull/5531)) - @ChanYiLin
- [FEATURE] Remove deprecated `allow-node-drain-with-last-healthy-replica` ([5620](https://github.com/longhorn/longhorn/issues/5620)) - @weizhe0422 @PhanLe1010
- [FEATURE] Set recurring jobs to PVCs ([5791](https://github.com/longhorn/longhorn/issues/5791)) - @yangchiu @c3y1huang
- [TASK] Automatically update crds.yaml in longhorn repo from longhorn-manager repo ([5854](https://github.com/longhorn/longhorn/issues/5854)) - @yangchiu
- [IMPROVEMENT] Remove privilege requirement from lifecycle jobs ([5862](https://github.com/longhorn/longhorn/issues/5862)) - @mantissahz @chriscchien
- [TASK][UI] support new aio typed instance managers ([5876](https://github.com/longhorn/longhorn/issues/5876)) - @smallteeths @chriscchien
- [TASK] Remove `Guaranteed Engine Manager CPU`, `Guaranteed Replica Manager CPU`, and `Guaranteed Engine CPU` settings. ([5917](https://github.com/longhorn/longhorn/issues/5917)) - @c3y1huang @roger-ryao
- [TASK][UI] Support volume backup policy ([6028](https://github.com/longhorn/longhorn/issues/6028)) - @smallteeths @chriscchien
- [TASK] Reduce BackupConcurrentLimit and RestoreConcurrentLimit default values ([6135](https://github.com/longhorn/longhorn/issues/6135)) - @derekbit @chriscchien

## Contributors

- @ChanYiLin
- @DamiaSan
- @PhanLe1010
- @WebberHuang1118
- @achims311
- @c3y1huang
- @chriscchien
- @derekbit
- @ejweber
- @hedefalk
- @innobead
- @khushboo-rancher
- @mantissahz
- @roger-ryao
- @shuo-wu
- @smallteeths
- @weizhe0422
- @yangchiu

CHANGELOG/CHANGELOG-1.5.1.md (new file)
@ -0,0 +1,65 @@

## Release Note

### **v1.5.1 released!** 🎆

Longhorn v1.5.1 is the latest version of Longhorn 1.5.
This release introduces bug fixes, described below, for v1.5.0 upgrade issues, stability, troubleshooting, and more. Please try it out and provide feedback. Thanks for all the contributions!

> For the definition of stable or latest release, please check [here](https://github.com/longhorn/longhorn#releases).

## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing v1.5.1.**

Longhorn supports three installation methods: Rancher App Marketplace, kubectl, and Helm (a kubectl sketch follows below). Follow the installation instructions [here](https://longhorn.io/docs/1.5.1/deploy/install/).
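For instance, the kubectl route is a single manifest apply, pinned to this release's tag (a minimal sketch):
```
# Apply the v1.5.1 release manifest to the cluster
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.5.1/deploy/longhorn.yaml
```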

## Upgrade

> **Please read the [important notes](https://longhorn.io/docs/1.5.1/deploy/important-notes/) first and ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.5.1 from v1.4.x/v1.5.0, which are the only supported source versions.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.5.1/deploy/upgrade/).

## Deprecation & Incompatibilities

N/A

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) on any outstanding issues found after this release.

## Improvement

- [IMPROVEMENT] Implement/fix the unit tests of Volume Attachment and volume controller ([6005](https://github.com/longhorn/longhorn/issues/6005)) - @PhanLe1010
- [QUESTION] Repetitive warnings and errors in a new longhorn setup ([6257](https://github.com/longhorn/longhorn/issues/6257)) - @derekbit @c3y1huang @roger-ryao

## Resilience

- [BUG] 1.5.0 Upgrade: Longhorn conversion webhook server fails ([6259](https://github.com/longhorn/longhorn/issues/6259)) - @derekbit @roger-ryao
- [BUG] Race leaves snapshot CRs that cannot be deleted ([6298](https://github.com/longhorn/longhorn/issues/6298)) - @yangchiu @PhanLe1010 @ejweber

## Bugs

- [BUG] Engine continues to attempt to rebuild replica while detaching ([6217](https://github.com/longhorn/longhorn/issues/6217)) - @yangchiu @ejweber
- [BUG] Upgrade to 1.5.0 failed: validator.longhorn.io denied the request if having orphan resources ([6246](https://github.com/longhorn/longhorn/issues/6246)) - @derekbit @roger-ryao
- [BUG] Unable to receive support bundle from UI when it's large (400MB+) ([6256](https://github.com/longhorn/longhorn/issues/6256)) - @c3y1huang @chriscchien
- [BUG] Longhorn Manager Pods CrashLoop after upgrade from 1.4.0 to 1.5.0 while backing up volumes ([6264](https://github.com/longhorn/longhorn/issues/6264)) - @ChanYiLin @roger-ryao
- [BUG] Cannot delete type=`bi` VolumeSnapshot if the related backing image does not exist ([6266](https://github.com/longhorn/longhorn/issues/6266)) - @ChanYiLin @chriscchien
- [BUG] 1.5.0: AttachVolume.Attach failed for volume, the volume is currently attached to a different node ([6287](https://github.com/longhorn/longhorn/issues/6287)) - @yangchiu @derekbit
- [BUG] test case test_setting_priority_class failed in master and v1.5.x ([6319](https://github.com/longhorn/longhorn/issues/6319)) - @derekbit @chriscchien
- [BUG] Unused webhook and recovery backend deployment left in helm chart ([6252](https://github.com/longhorn/longhorn/issues/6252)) - @ChanYiLin @chriscchien

## Misc

- [DOC] v1.5.0: additional outgoing firewall ports need to be opened (9501, 9502, 9503) ([6317](https://github.com/longhorn/longhorn/issues/6317)) - @ChanYiLin @chriscchien

## Contributors

- @ChanYiLin
- @PhanLe1010
- @c3y1huang
- @chriscchien
- @derekbit
- @ejweber
- @innobead
- @roger-ryao
- @yangchiu

MAINTAINERS
@ -3,5 +3,6 @@ The list of current Longhorn maintainers:
Name, <Email>, @GitHubHandle
Sheng Yang, <sheng@yasker.org>, @yasker
Shuo Wu, <shuo.wu@suse.com>, @shuo-wu
Joshua Moody, <joshua.moody@suse.com>, @joshimoo
David Ko, <dko@suse.com>, @innobead
Derek Su, <derek.su@suse.com>, @derekbit
Phan Le, <phan.le@suse.com>, @PhanLe1010

README.md
@ -1,8 +1,20 @@
# Longhorn
<h1 align="center" style="border-bottom: none">
<a href="https://longhorn.io/" target="_blank"><img alt="Longhorn" width="120px" src="https://github.com/longhorn/website/blob/master/static/img/icon-longhorn.svg"></a><br>Longhorn
</h1>

Longhorn is a distributed block storage system for Kubernetes. Longhorn is cloud native storage because it is built using Kubernetes and container primitives.
<p align="center">A CNCF Incubating Project. Visit <a href="https://longhorn.io/" target="_blank">longhorn.io</a> for the full documentation.</p>

Longhorn is lightweight, reliable, and powerful. You can install Longhorn on an existing Kubernetes cluster with one `kubectl apply` command or using Helm charts. Once Longhorn is installed, it adds persistent volume support to the Kubernetes cluster.
<div align="center">

[](https://github.com/longhorn/longhorn/releases)
[](https://github.com/longhorn/longhorn/blob/master/LICENSE)
[](https://longhorn.io/docs/latest/)

</div>

Longhorn is a distributed block storage system for Kubernetes. Longhorn is cloud-native storage built using Kubernetes and container primitives.

Longhorn is lightweight, reliable, and powerful. You can install Longhorn on an existing Kubernetes cluster with one `kubectl apply` command or by using Helm charts. Once Longhorn is installed, it adds persistent volume support to the Kubernetes cluster.

Longhorn implements distributed block storage using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes. Here are some notable features of Longhorn:
@ -15,39 +27,37 @@ Longhorn implements distributed block storage using containers and microservices
You can read more technical details of Longhorn [here](https://longhorn.io/).

## Current Status
# Releases

The latest release of Longhorn is [](https://github.com/longhorn/longhorn/releases)
> **NOTE**:
> - __\<version\>*__ means the release branch is under active support and will have periodic follow-up patch releases.
> - __Latest__ release means the version is the latest release of the newest release branch.
> - __Stable__ release means the version is stable and has been widely adopted by users.

https://github.com/longhorn/longhorn/releases

| Release | Version | Type | Release Note (Changelog) | Important Note |
|-----------|---------|----------------|----------------------------------------------------------------|-------------------------------------------------------------|
| **1.5*** | 1.5.1 | Latest | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.5.1) | [🔗](https://longhorn.io/docs/1.5.1/deploy/important-notes) |
| **1.4*** | 1.4.4 | Stable | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.4.4) | [🔗](https://longhorn.io/docs/1.4.4/deploy/important-notes) |
| 1.3 | 1.3.3 | Stable | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.3.3) | [🔗](https://longhorn.io/docs/1.3.3/deploy/important-notes) |
| 1.2 | 1.2.6 | Stable | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.2.6) | [🔗](https://longhorn.io/docs/1.2.6/deploy/important-notes) |
| 1.1 | 1.1.3 | Stable | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.1.3) | |

# Roadmap

https://github.com/longhorn/longhorn/wiki/Roadmap

# Components

Longhorn is 100% open source software. Project source code is spread across a number of repos:

## Build Status
* Engine: [](https://drone-publish.longhorn.io/longhorn/longhorn-engine)[](https://goreportcard.com/report/github.com/longhorn/longhorn-engine)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-engine?ref=badge_shield)
* Manager: [](https://drone-publish.longhorn.io/longhorn/longhorn-manager)[](https://goreportcard.com/report/github.com/longhorn/longhorn-manager)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-manager?ref=badge_shield)
* Instance Manager: [](http://drone-publish.longhorn.io/longhorn/longhorn-instance-manager)[](https://goreportcard.com/report/github.com/longhorn/longhorn-instance-manager)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-instance-manager?ref=badge_shield)
* Share Manager: [](http://drone-publish.longhorn.io/longhorn/longhorn-share-manager)[](https://goreportcard.com/report/github.com/longhorn/longhorn-share-manager)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-share-manager?ref=badge_shield)
* Backing Image Manager: [](http://drone-publish.longhorn.io/longhorn/backing-image-manager)[](https://goreportcard.com/report/github.com/longhorn/backing-image-manager)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Fbacking-image-manager?ref=badge_shield)
* UI: [](https://drone-publish.longhorn.io/longhorn/longhorn-ui)[](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-ui?ref=badge_shield)
* Test: [](http://drone-publish.longhorn.io/longhorn/longhorn-tests)

## Release Status

| Release | Version | Type |
| --------|---------|----------------|
| 1.2 | 1.2.3 | Stable, Latest |
| 1.1 | 1.1.3 | Stable, Latest |

## Get Involved

### Community Meeting and Office Hours
Hosted by the core maintainers of Longhorn: 4th Friday of every month at 09:00 (CET) or 16:00 (CST) at https://community.cncf.io/longhorn-community/.

### Longhorn Mailing List
Stay up to date on the latest news and events: https://lists.cncf.io/g/cncf-longhorn

You can read more about the community and its events here: https://github.com/longhorn/community

## Source code

Longhorn is 100% open source software. Project source code is spread across a number of repos:

| Component | What it does | GitHub repo |
| :----------------------------- | :--------------------------------------------------------------------- | :------------------------------------------------------------------------------------------ |
@ -60,15 +70,21 @@ Longhorn is 100% open source software. Project source code is spread across a number of repos:

# Get Started

## Requirements

For the installation requirements, refer to the [Longhorn documentation.](https://longhorn.io/docs/latest/deploy/install/#installation-requirements)

## Installation

> **NOTE**:
> Please note that the master branch is for the upcoming feature release development.
> For an official release installation or upgrade, please refer to the ways below.

Longhorn can be installed on a Kubernetes cluster in several ways (a Helm sketch follows this list):

- [Rancher catalog app](https://longhorn.io/docs/latest/deploy/install/install-with-rancher/)
- [Rancher App Marketplace](https://longhorn.io/docs/latest/deploy/install/install-with-rancher/)
- [kubectl](https://longhorn.io/docs/latest/deploy/install/install-with-kubectl/)
- [Helm](https://longhorn.io/docs/latest/deploy/install/install-with-helm/)
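
As a minimal sketch of the Helm route (using the project's published chart repository and the conventional `longhorn-system` namespace):
```
# Register the Longhorn chart repository and install the latest chart
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
```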

@ -76,6 +92,24 @@ Longhorn can be installed on a Kubernetes cluster in several ways:

The official Longhorn documentation is [here.](https://longhorn.io/docs)

# Get Involved

## Discussion, Feedback

If you have any discussions or feedback, feel free to [file a discussion](https://github.com/longhorn/longhorn/discussions).

## Feature Requests, Bug Reporting

If you have any issues, feel free to [file an issue](https://github.com/longhorn/longhorn/issues/new/choose).
We have a weekly community issue review meeting to review all reported issues or enhancement requests.

When creating a bug issue, please help by uploading the support bundle to the issue or sending it to
[longhorn-support-bundle](mailto:longhorn-support-bundle@suse.com).

## Report Vulnerabilities

If you find any vulnerabilities, please report them to [longhorn-security](mailto:longhorn-security@suse.com).

# Community

Longhorn is open source software, so contributions are greatly welcome.

@ -87,25 +121,17 @@ If you have any feedbacks, feel free to [file an issue](https://github.com/longh

If you have any discussions, feedback, requests, issues, or security reports, please follow the ways below.
We also have a [CNCF Slack channel: longhorn](https://cloud-native.slack.com/messages/longhorn) for discussion.

## Discussions or Feedbacks
## Community Meeting and Office Hours
Hosted by the core maintainers of Longhorn: 4th Friday of every month at 09:00 (CET) or 16:00 (CST) at https://community.cncf.io/longhorn-community/.

If you have any discussions or feedback, feel free to [file a discussion](https://github.com/longhorn/longhorn/discussions).
## Longhorn Mailing List
Stay up to date on the latest news and events: https://lists.cncf.io/g/cncf-longhorn

## Requests or Issues

If you have any issues, feel free to [file an issue](https://github.com/longhorn/longhorn/issues/new/choose).
We have a weekly community issue review meeting to review all reported issues or enhancement requests.

When creating a bug issue, please help by uploading the support bundle to the issue or sending it to
[longhorn-support-bundle](mailto:longhorn-support-bundle@suse.com).

## Report Vulnerabilities

If you find any vulnerabilities, please report them to [longhorn-security](mailto:longhorn-security@suse.com).
You can read more about the community and its events here: https://github.com/longhorn/community

# License

Copyright (c) 2014-2021 The Longhorn Authors
Copyright (c) 2014-2022 The Longhorn Authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

Chart.yaml
@ -1,8 +1,8 @@
apiVersion: v1
name: longhorn
version: 1.2.3
appVersion: v1.2.3
kubeVersion: ">=1.18.0-0"
version: 1.6.0-dev
appVersion: v1.6.0-dev
kubeVersion: ">=1.21.0-0"
description: Longhorn is a distributed block storage system for Kubernetes.
keywords:
- longhorn

chart/README.md
@ -18,10 +18,24 @@ Longhorn is 100% open source software. Project source code is spread across a number of repos:
## Prerequisites

1. A container runtime compatible with Kubernetes (Docker v1.13+, containerd v1.3.7+, etc.)
2. Kubernetes v1.18+
2. Kubernetes >= v1.21
3. Make sure `bash`, `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
4. Make sure `open-iscsi` has been installed, and the `iscsid` daemon is running on all nodes of the Kubernetes cluster. For GKE, Ubuntu is the recommended guest OS image since it already contains `open-iscsi`.

## Upgrading to Kubernetes v1.25+

Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API.

As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `enablePSP` set to `false` if it has been previously set to `true`.

> **Note:**
> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).**
>
> If your charts get stuck in this state, you may have to clean up your Helm release secrets.

Upon setting `enablePSP` to false, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart.

As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Longhorn docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards.
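
A minimal sketch of that in-place upgrade, assuming the release is named `longhorn` and lives in the `longhorn-system` namespace:
```
# Keep existing values, only flip enablePSP off before moving to Kubernetes v1.25+
helm upgrade longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --reuse-values \
  --set enablePSP=false
```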

## Installation
1. Add Longhorn chart repository.
```
helm repo add longhorn https://charts.longhorn.io
```

@ -49,14 +63,264 @@ helm install longhorn longhorn/longhorn --namespace longhorn-system

To uninstall Longhorn with Helm 2:
```
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
helm delete longhorn --purge
```

To uninstall Longhorn with Helm 3:
```
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
helm uninstall longhorn -n longhorn-system
kubectl delete namespace longhorn-system
```

## Values

The `values.yaml` contains items used to tweak a deployment of this chart.
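
Any value documented below can be overridden with `--set` at install or upgrade time; for instance (an illustrative sketch, with the replica count chosen arbitrarily):
```
# Install with a non-default replica count for the default StorageClass
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace \
  --set persistence.defaultClassReplicaCount=2
```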

### Cattle Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| global.cattle.systemDefaultRegistry | string | `""` | System default registry |
| global.cattle.windowsCluster.defaultSetting.systemManagedComponentsNodeSelector | string | `"kubernetes.io/os:linux"` | Node selector for Longhorn system managed components |
| global.cattle.windowsCluster.defaultSetting.taintToleration | string | `"cattle.io/os=linux:NoSchedule"` | Toleration for Longhorn system managed components |
| global.cattle.windowsCluster.enabled | bool | `false` | Enable this to allow Longhorn to run on the Rancher deployed Windows cluster |
| global.cattle.windowsCluster.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Select Linux nodes to run Longhorn user deployed components |
| global.cattle.windowsCluster.tolerations | list | `[{"effect":"NoSchedule","key":"cattle.io/os","operator":"Equal","value":"linux"}]` | Tolerate Linux nodes to run Longhorn user deployed components |

### Network Policies

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| networkPolicies.enabled | bool | `false` | Enable NetworkPolicies to limit access to the Longhorn pods |
| networkPolicies.type | string | `"k3s"` | Create the policy based on your distribution to allow access for the ingress. Options: `k3s`, `rke2`, `rke1` |
### Image Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| image.csi.attacher.repository | string | `"longhornio/csi-attacher"` | Specify CSI attacher image repository. Leave blank to autodetect |
| image.csi.attacher.tag | string | `"v4.2.0"` | Specify CSI attacher image tag. Leave blank to autodetect |
| image.csi.livenessProbe.repository | string | `"longhornio/livenessprobe"` | Specify CSI liveness probe image repository. Leave blank to autodetect |
| image.csi.livenessProbe.tag | string | `"v2.9.0"` | Specify CSI liveness probe image tag. Leave blank to autodetect |
| image.csi.nodeDriverRegistrar.repository | string | `"longhornio/csi-node-driver-registrar"` | Specify CSI node driver registrar image repository. Leave blank to autodetect |
| image.csi.nodeDriverRegistrar.tag | string | `"v2.7.0"` | Specify CSI node driver registrar image tag. Leave blank to autodetect |
| image.csi.provisioner.repository | string | `"longhornio/csi-provisioner"` | Specify CSI provisioner image repository. Leave blank to autodetect |
| image.csi.provisioner.tag | string | `"v3.4.1"` | Specify CSI provisioner image tag. Leave blank to autodetect |
| image.csi.resizer.repository | string | `"longhornio/csi-resizer"` | Specify CSI driver resizer image repository. Leave blank to autodetect |
| image.csi.resizer.tag | string | `"v1.7.0"` | Specify CSI driver resizer image tag. Leave blank to autodetect |
| image.csi.snapshotter.repository | string | `"longhornio/csi-snapshotter"` | Specify CSI driver snapshotter image repository. Leave blank to autodetect |
| image.csi.snapshotter.tag | string | `"v6.2.1"` | Specify CSI driver snapshotter image tag. Leave blank to autodetect. |
| image.longhorn.backingImageManager.repository | string | `"longhornio/backing-image-manager"` | Specify Longhorn backing image manager image repository |
| image.longhorn.backingImageManager.tag | string | `"master-head"` | Specify Longhorn backing image manager image tag |
| image.longhorn.engine.repository | string | `"longhornio/longhorn-engine"` | Specify Longhorn engine image repository |
| image.longhorn.engine.tag | string | `"master-head"` | Specify Longhorn engine image tag |
| image.longhorn.instanceManager.repository | string | `"longhornio/longhorn-instance-manager"` | Specify Longhorn instance manager image repository |
| image.longhorn.instanceManager.tag | string | `"master-head"` | Specify Longhorn instance manager image tag |
| image.longhorn.manager.repository | string | `"longhornio/longhorn-manager"` | Specify Longhorn manager image repository |
| image.longhorn.manager.tag | string | `"master-head"` | Specify Longhorn manager image tag |
| image.longhorn.shareManager.repository | string | `"longhornio/longhorn-share-manager"` | Specify Longhorn share manager image repository |
| image.longhorn.shareManager.tag | string | `"master-head"` | Specify Longhorn share manager image tag |
| image.longhorn.supportBundleKit.repository | string | `"longhornio/support-bundle-kit"` | Specify Longhorn support bundle manager image repository |
| image.longhorn.supportBundleKit.tag | string | `"v0.0.27"` | Specify Longhorn support bundle manager image tag |
| image.longhorn.ui.repository | string | `"longhornio/longhorn-ui"` | Specify Longhorn UI image repository |
| image.longhorn.ui.tag | string | `"master-head"` | Specify Longhorn UI image tag |
| image.openshift.oauthProxy.repository | string | `"quay.io/openshift/origin-oauth-proxy"` | For OpenShift users. Specify OAuth proxy image repository |
| image.openshift.oauthProxy.tag | float | `4.13` | For OpenShift users. Specify OAuth proxy image tag. Note: use your OCP/OKD 4.X version; current stable is 4.13 |
| image.pullPolicy | string | `"IfNotPresent"` | Image pull policy, which applies to all user deployed Longhorn components, e.g., Longhorn manager, Longhorn driver, Longhorn UI |
### Service Settings

| Key | Description |
|-----|-------------|
| service.manager.nodePort | NodePort port number (to set explicitly, choose port between 30000-32767) |
| service.manager.type | Define Longhorn manager service type. |
| service.ui.nodePort | NodePort port number (to set explicitly, choose port between 30000-32767) |
| service.ui.type | Define Longhorn UI service type. Options: `ClusterIP`, `NodePort`, `LoadBalancer`, `Rancher-Proxy` |

### StorageClass Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| persistence.backingImage.dataSourceParameters | string | `nil` | Specify the data source parameters for the backing image used in the Longhorn StorageClass. This option accepts a JSON string of a map, e.g., `'{\"url\":\"https://backing-image-example.s3-region.amazonaws.com/test-backing-image\"}'`. |
| persistence.backingImage.dataSourceType | string | `nil` | Specify the data source type for the backing image used in the Longhorn StorageClass. If the backing image does not exist, Longhorn will use this field to create a backing image. Otherwise, Longhorn will use it to verify the selected backing image. |
| persistence.backingImage.enable | bool | `false` | Set a backing image for the Longhorn StorageClass |
| persistence.backingImage.expectedChecksum | string | `nil` | Specify the expected SHA512 checksum of the selected backing image in the Longhorn StorageClass |
| persistence.backingImage.name | string | `nil` | Specify a backing image that will be used by Longhorn volumes in the Longhorn StorageClass. If it does not exist, the backing image data source type and data source parameters should be specified so that Longhorn can create the backing image before using it |
| persistence.defaultClass | bool | `true` | Set the Longhorn StorageClass as default |
| persistence.defaultClassReplicaCount | int | `3` | Set the replica count for the Longhorn StorageClass |
| persistence.defaultDataLocality | string | `"disabled"` | Set data locality for the Longhorn StorageClass. Options: `disabled`, `best-effort` |
| persistence.defaultFsType | string | `"ext4"` | Set the filesystem type for the Longhorn StorageClass |
| persistence.defaultMkfsParams | string | `""` | Set mkfs options for the Longhorn StorageClass |
| persistence.defaultNodeSelector.enable | bool | `false` | Enable the node selector for the Longhorn StorageClass |
| persistence.defaultNodeSelector.selector | string | `""` | This selector enables only certain nodes having these tags to be used for the volume, e.g. `"storage,fast"` |
| persistence.migratable | bool | `false` | Set volume migratable for the Longhorn StorageClass |
| persistence.reclaimPolicy | string | `"Delete"` | Define the reclaim policy. Options: `Retain`, `Delete` |
| persistence.recurringJobSelector.enable | bool | `false` | Enable the recurring job selector for the Longhorn StorageClass |
| persistence.recurringJobSelector.jobList | list | `[]` | Recurring job selector list for the Longhorn StorageClass. Please be careful with quoting of input, e.g., `[{"name":"backup", "isGroup":true}]` |
| persistence.removeSnapshotsDuringFilesystemTrim | string | `"ignored"` | Allow automatically removing snapshots during filesystem trim for the Longhorn StorageClass. Options: `ignored`, `enabled`, `disabled` |

### CSI Settings

| Key | Description |
|-----|-------------|
| csi.attacherReplicaCount | Specify replica count of CSI Attacher. Leave blank to use default count: 3 |
| csi.kubeletRootDir | Specify kubelet root-dir. Leave blank to autodetect |
| csi.provisionerReplicaCount | Specify replica count of CSI Provisioner. Leave blank to use default count: 3 |
| csi.resizerReplicaCount | Specify replica count of CSI Resizer. Leave blank to use default count: 3 |
| csi.snapshotterReplicaCount | Specify replica count of CSI Snapshotter. Leave blank to use default count: 3 |

### Longhorn Manager Settings

The Longhorn system contains user deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn manager component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| longhornManager.log.format | string | `"plain"` | Options: `plain`, `json` |
| longhornManager.nodeSelector | object | `{}` | Select nodes to run Longhorn manager |
| longhornManager.priorityClass | string | `nil` | Priority class for Longhorn manager |
| longhornManager.serviceAnnotations | object | `{}` | Annotation used in Longhorn manager service |
| longhornManager.tolerations | list | `[]` | Tolerate nodes to run Longhorn manager |

### Longhorn Driver Settings

The Longhorn system contains user deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn driver component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| longhornDriver.nodeSelector | object | `{}` | Select nodes to run Longhorn driver |
| longhornDriver.priorityClass | string | `nil` | Priority class for Longhorn driver |
| longhornDriver.tolerations | list | `[]` | Tolerate nodes to run Longhorn driver |

### Longhorn UI Settings

The Longhorn system contains user deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn UI component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| longhornUI.nodeSelector | object | `{}` | Select nodes to run Longhorn UI |
| longhornUI.priorityClass | string | `nil` | Priority class for Longhorn UI |
| longhornUI.replicas | int | `2` | Replica count for Longhorn UI |
| longhornUI.tolerations | list | `[]` | Tolerate nodes to run Longhorn UI |

### Ingress Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| ingress.annotations | string | `nil` | Ingress annotations done as key:value pairs |
| ingress.enabled | bool | `false` | Set to true to enable ingress record generation |
| ingress.host | string | `"sslip.io"` | Layer 7 Load Balancer hostname |
| ingress.ingressClassName | string | `nil` | Add ingressClassName to the Ingress. Can replace the kubernetes.io/ingress.class annotation on v1.18+ |
| ingress.path | string | `"/"` | If ingress is enabled, you can set the default ingress path and then access the UI using the full path {{host}}+{{path}} |
| ingress.secrets | string | `nil` | If you're providing your own certificates, please use this to add the certificates as secrets |
| ingress.secureBackends | bool | `false` | Enable this so that the backend service is connected at port 443 |
| ingress.tls | bool | `false` | Set this to true in order to enable TLS on the ingress record |
| ingress.tlsSecret | string | `"longhorn.local-tls"` | If TLS is set to true, you must declare what secret will store the key/certificate for TLS |
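
For instance, exposing the UI through an ingress could look like the following sketch (the hostname and TLS secret name are placeholders):
```
# Enable an ingress record for the Longhorn UI on an existing release
helm upgrade longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --reuse-values \
  --set ingress.enabled=true \
  --set ingress.host=longhorn.example.com \
  --set ingress.tls=true \
  --set ingress.tlsSecret=longhorn-example-tls
```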
### Private Registry Settings

Longhorn can be installed in an air gapped environment with private registry settings. Please refer to **Air Gap Installation** on our official site [link](https://longhorn.io/docs).

| Key | Description |
|-----|-------------|
| privateRegistry.createSecret | Set `true` to create a new private registry secret |
| privateRegistry.registryPasswd | Password used to authenticate to the private registry |
| privateRegistry.registrySecret | If creating a new private registry secret is enabled, a Kubernetes secret with this name is created; otherwise the existing secret of this name is used. Use it to pull images from your private registry |
| privateRegistry.registryUrl | URL of the private registry. Leave blank to apply the system default registry |
| privateRegistry.registryUser | User used to authenticate to the private registry |
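
An illustrative sketch of these settings together (registry URL and credentials are placeholders):
```
# Install pulling all images from a private registry, creating the pull secret on the fly
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace \
  --set privateRegistry.createSecret=true \
  --set privateRegistry.registryUrl=registry.example.com \
  --set privateRegistry.registryUser=admin \
  --set privateRegistry.registryPasswd=changeme \
  --set privateRegistry.registrySecret=longhorn-registry-secret
```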
### OS/Kubernetes Distro Settings

#### Openshift Settings

Please also refer to this document [ocp-readme](https://github.com/longhorn/longhorn/blob/master/chart/ocp-readme.md) for more details.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| openshift.enabled | bool | `false` | Enable when using OpenShift |
| openshift.ui.port | int | `443` | UI port in OpenShift environment |
| openshift.ui.proxy | int | `8443` | UI proxy in OpenShift environment |
| openshift.ui.route | string | `"longhorn-ui"` | UI route in OpenShift environment |

### Other Settings

| Key | Default | Description |
|-----|---------|-------------|
| annotations | `{}` | Annotations to add to the Longhorn Manager DaemonSet Pods. Optional. |
| enablePSP | `false` | For Kubernetes < v1.25, if your cluster enables the Pod Security Policy admission controller, set this to `true` to ship longhorn-psp, which allows privileged Longhorn pods to start |

### System Default Settings

For system default settings, you can first leave them blank to use the default values, which will be applied when installing Longhorn.
You can then change them through the UI after installation.
For more details like types or options, you can refer to **Settings Reference** on our official site [link](https://longhorn.io/docs).
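
A minimal sketch of seeding a few of these defaults at install time (the NFS backup target URL is a placeholder):
```
# Pre-configure a backup target and a more verbose log level
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace \
  --set defaultSettings.backupTarget="nfs://backupstore.example.com:/opt/backupstore" \
  --set defaultSettings.logLevel=Debug
```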

| Key | Description |
|-----|-------------|
| defaultSettings.allowEmptyDiskSelectorVolume | Allow Scheduling Empty Disk Selector Volumes To Any Disk |
| defaultSettings.allowEmptyNodeSelectorVolume | Allow Scheduling Empty Node Selector Volumes To Any Node |
| defaultSettings.allowRecurringJobWhileVolumeDetached | If this setting is enabled, Longhorn will automatically attach the volume and take a snapshot/backup when it is time to do a recurring snapshot/backup. |
| defaultSettings.allowVolumeCreationWithDegradedAvailability | This setting allows users to create and attach a volume that doesn't have all the replicas scheduled at the time of creation. |
| defaultSettings.autoCleanupSystemGeneratedSnapshot | This setting enables Longhorn to automatically clean up the system generated snapshot after replica rebuild is done. |
| defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly | If enabled, Longhorn will automatically delete the workload pod that is managed by a controller (e.g. deployment, statefulset, daemonset, etc...) when the Longhorn volume is detached unexpectedly (e.g. during Kubernetes upgrade, Docker reboot, or network disconnect). By deleting the pod, its controller restarts the pod and Kubernetes handles volume reattachment and remount. |
| defaultSettings.autoSalvage | If enabled, volumes will be automatically salvaged when all the replicas become faulty, e.g. due to network disconnection. Longhorn will try to figure out which replica(s) are usable, then use them for the volume. By default true. |
| defaultSettings.backingImageCleanupWaitInterval | This interval in minutes determines how long Longhorn will wait before cleaning up the backing image file when there is no replica in the disk using it. |
| defaultSettings.backingImageRecoveryWaitInterval | This interval in seconds determines how long Longhorn will wait before re-downloading the backing image file when all disk files of this backing image become failed or unknown. |
| defaultSettings.backupCompressionMethod | This setting allows users to specify the backup compression method. |
| defaultSettings.backupConcurrentLimit | This setting controls how many worker threads run concurrently per backup. |
| defaultSettings.backupTarget | The endpoint used to access the backupstore. Available: NFS, CIFS, AWS, GCP, AZURE. |
| defaultSettings.backupTargetCredentialSecret | The name of the Kubernetes secret associated with the backup target. |
| defaultSettings.backupstorePollInterval | In seconds. The backupstore poll interval determines how often Longhorn checks the backupstore for new backups. Set to 0 to disable the polling. By default 300. |
| defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit | This setting controls how Longhorn automatically upgrades volumes' engines to the new default engine image after upgrading Longhorn manager. The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time. If the value is 0, Longhorn will not automatically upgrade volumes' engines to the default version. |
| defaultSettings.concurrentReplicaRebuildPerNodeLimit | This setting controls how many replicas on a node can be rebuilt simultaneously. |
| defaultSettings.concurrentVolumeBackupRestorePerNodeLimit | This setting controls how many volumes on a node can restore the backup concurrently. Set the value to **0** to disable backup restore. |
| defaultSettings.createDefaultDiskLabeledNodes | Create the default Disk automatically only on Nodes with the label "node.longhorn.io/create-default-disk=true" if no other disks exist. If disabled, the default disk will be created on all new nodes when each node is first added. |
| defaultSettings.defaultDataLocality | A Longhorn volume has data locality if there is a local replica of the volume on the same node as the pod which is using the volume. |
| defaultSettings.defaultDataPath | Default path to use for storing data on a host. By default "/var/lib/longhorn/" |
| defaultSettings.defaultLonghornStaticStorageClass | The 'storageClassName' is given to PVs and PVCs that are created for an existing Longhorn volume. The StorageClass name can also be used as a label, so it is possible to use a Longhorn StorageClass to bind a workload to an existing PV without creating a Kubernetes StorageClass object. By default 'longhorn-static'. |
| defaultSettings.defaultReplicaCount | The default number of replicas when a volume is created from the Longhorn UI. For Kubernetes configuration, update the `numberOfReplicas` in the StorageClass. By default 3. |
| defaultSettings.deletingConfirmationFlag | This flag is designed to prevent Longhorn from being accidentally uninstalled, which would lead to data loss. |
| defaultSettings.disableRevisionCounter | This setting is only for volumes created by the UI. By default, this is false, meaning there will be a revision counter file to track every write to the volume. During salvage recovery, Longhorn will pick the replica with the largest revision counter as the candidate to recover the whole volume. If the revision counter is disabled, Longhorn will not track every write to the volume. During salvage recovery, Longhorn will use the 'volume-head-xxx.img' file last modification time and file size to pick the replica candidate to recover the whole volume. |
| defaultSettings.disableSchedulingOnCordonedNode | Disable Longhorn manager from scheduling replicas on Kubernetes cordoned nodes. By default true. |
| defaultSettings.engineReplicaTimeout | In seconds. The setting specifies the timeout between the engine and replica(s), and the value should be between 8 and 30 seconds. The default value is 8 seconds. |
| defaultSettings.failedBackupTTL | In minutes. This setting determines how long Longhorn will keep a backup resource that failed. Set to 0 to disable the auto-deletion. |
| defaultSettings.fastReplicaRebuildEnabled | This feature supports fast replica rebuilding. It relies on the checksum of snapshot disk files, so setting the snapshot-data-integrity to **enable** or **fast-check** is a prerequisite. |
| defaultSettings.guaranteedInstanceManagerCPU | This integer value indicates what percentage of the total allocatable CPU on each node will be reserved for each instance manager Pod. You can leave it with the default value, which is 12%. |
| defaultSettings.kubernetesClusterAutoscalerEnabled | Enabling this setting will notify Longhorn that the cluster is using the Kubernetes Cluster Autoscaler. |
| defaultSettings.logLevel | The log level Panic, Fatal, Error, Warn, Info, Debug, Trace used in longhorn manager. Defaults to Info. |
|
||||
| defaultSettings.nodeDownPodDeletionPolicy | Defines the Longhorn action when a Volume is stuck with a StatefulSet/Deployment Pod on a node that is down. |
|
||||
| defaultSettings.nodeDrainPolicy | Define the policy to use when a node with the last healthy replica of a volume is drained. |
|
||||
| defaultSettings.offlineReplicaRebuilding | This setting allows users to enable the offline replica rebuilding for volumes using v2 data engine. |
|
||||
| defaultSettings.orphanAutoDeletion | This setting allows Longhorn to delete the orphan resource and its corresponding orphaned data automatically like stale replicas. Orphan resources on down or unknown nodes will not be cleaned up automatically. |
|
||||
| defaultSettings.priorityClass | priorityClass for longhorn system componentss |
|
||||
| defaultSettings.recurringFailedJobsHistoryLimit | This setting specifies how many failed backup or snapshot job histories should be retained. History will not be retained if the value is 0. |
|
||||
| defaultSettings.recurringSuccessfulJobsHistoryLimit | This setting specifies how many successful backup or snapshot job histories should be retained. History will not be retained if the value is 0. |
|
||||
| defaultSettings.removeSnapshotsDuringFilesystemTrim | This setting allows Longhorn filesystem trim feature to automatically mark the latest snapshot and its ancestors as removed and stops at the snapshot containing multiple children. |
|
||||
| defaultSettings.replicaAutoBalance | Enable this setting automatically rebalances replicas when discovered an available node. |
|
||||
| defaultSettings.replicaDiskSoftAntiAffinity | Allow scheduling on disks with existing healthy replicas of the same volume. By default true. |
|
||||
| defaultSettings.replicaFileSyncHttpClientTimeout | In seconds. The setting specifies the HTTP client timeout to the file sync server. |
|
||||
| defaultSettings.replicaReplenishmentWaitInterval | In seconds. The interval determines how long Longhorn will wait at least in order to reuse the existing data on a failed replica rather than directly creating a new replica for a degraded volume. |
|
||||
| defaultSettings.replicaSoftAntiAffinity | Allow scheduling on nodes with existing healthy replicas of the same volume. By default false. |
|
||||
| defaultSettings.replicaZoneSoftAntiAffinity | Allow scheduling new Replicas of Volume to the Nodes in the same Zone as existing healthy Replicas. Nodes don't belong to any Zone will be treated as in the same Zone. Notice that Longhorn relies on label `topology.kubernetes.io/zone=<Zone name of the node>` in the Kubernetes node object to identify the zone. By default true. |
|
||||
| defaultSettings.restoreConcurrentLimit | This setting controls how many worker threads per restore concurrently. |
|
||||
| defaultSettings.restoreVolumeRecurringJobs | Restore recurring jobs from the backup volume on the backup target and create recurring jobs if not exist during a backup restoration. |
|
||||
| defaultSettings.snapshotDataIntegrity | This setting allows users to enable or disable snapshot hashing and data integrity checking. |
|
||||
| defaultSettings.snapshotDataIntegrityCronjob | Unix-cron string format. The setting specifies when Longhorn checks the data integrity of snapshot disk files. |
|
||||
| defaultSettings.snapshotDataIntegrityImmediateCheckAfterSnapshotCreation | Hashing snapshot disk files impacts the performance of the system. The immediate snapshot hashing and checking can be disabled to minimize the impact after creating a snapshot. |
|
||||
| defaultSettings.storageMinimalAvailablePercentage | If the minimum available disk capacity exceeds the actual percentage of available disk capacity, the disk becomes unschedulable until more space is freed up. By default 25. |
|
||||
| defaultSettings.storageNetwork | Longhorn uses the storage network for in-cluster data traffic. Leave this blank to use the Kubernetes cluster network. |
|
||||
| defaultSettings.storageOverProvisioningPercentage | The over-provisioning percentage defines how much storage can be allocated relative to the hard drive's capacity. By default 200. |
|
||||
| defaultSettings.storageReservedPercentageForDefaultDisk | The reserved percentage specifies the percentage of disk space that will not be allocated to the default disk on each new Longhorn node. |
|
||||
| defaultSettings.supportBundleFailedHistoryLimit | This setting specifies how many failed support bundles can exist in the cluster. Set this value to **0** to have Longhorn automatically purge all failed support bundles. |
|
||||
| defaultSettings.systemManagedComponentsNodeSelector | nodeSelector for longhorn system components |
|
||||
| defaultSettings.systemManagedPodsImagePullPolicy | This setting defines the Image Pull Policy of Longhorn system managed pod. e.g. instance manager, engine image, CSI driver, etc. The new Image Pull Policy will only apply after the system managed pods restart. |
|
||||
| defaultSettings.taintToleration | taintToleration for longhorn system components |
|
||||
| defaultSettings.upgradeChecker | Upgrade Checker will check for new Longhorn version periodically. When there is a new version available, a notification will appear in the UI. By default true. |
|
||||
| defaultSettings.v2DataEngine | This allows users to activate v2 data engine based on SPDK. Currently, it is in the preview phase and should not be utilized in a production environment. |
|
||||
|
||||
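
These defaults can also be overridden at install time through Helm rather than the UI. A minimal sketch, assuming the chart repository was added as `longhorn` and using two illustrative keys from the table above:

```
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --set defaultSettings.defaultReplicaCount=2 \
  --set defaultSettings.backupstorePollInterval=500
```
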
---
Please see [link](https://github.com/longhorn/longhorn) for more information.
253 chart/README.md.gotmpl Normal file
@@ -0,0 +1,253 @@
# Longhorn Chart

> **Important**: Please install the Longhorn chart in the `longhorn-system` namespace only.

> **Warning**: Longhorn doesn't support downgrading from a higher version to a lower version.

## Source Code

Longhorn is 100% open source software. Project source code is spread across a number of repos:

1. Longhorn Engine -- Core controller/replica logic https://github.com/longhorn/longhorn-engine
2. Longhorn Instance Manager -- Controller/replica instance lifecycle management https://github.com/longhorn/longhorn-instance-manager
3. Longhorn Share Manager -- NFS provisioner that exposes Longhorn volumes as ReadWriteMany volumes https://github.com/longhorn/longhorn-share-manager
4. Backing Image Manager -- Backing image file lifecycle management https://github.com/longhorn/backing-image-manager
5. Longhorn Manager -- Longhorn orchestration, includes CSI driver for Kubernetes https://github.com/longhorn/longhorn-manager
6. Longhorn UI -- Dashboard https://github.com/longhorn/longhorn-ui

## Prerequisites

1. A container runtime compatible with Kubernetes (Docker v1.13+, containerd v1.3.7+, etc.)
2. Kubernetes >= v1.21
3. Make sure `bash`, `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
4. Make sure `open-iscsi` has been installed, and the `iscsid` daemon is running, on all nodes of the Kubernetes cluster. For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`. A quick verification sketch follows this list.
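
One way to verify these prerequisites on every node is Longhorn's environment check script from the main repository (path current at the time of writing; it deploys a short-lived DaemonSet to inspect each node, so a configured `kubectl` is required):

```
curl -sSfL https://raw.githubusercontent.com/longhorn/longhorn/master/scripts/environment_check.sh | bash
```
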

## Upgrading to Kubernetes v1.25+

Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API.

As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `enablePSP` set to `false` if it has previously been set to `true`.

> **Note:**
> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).**
>
> If your charts get stuck in this state, you may have to clean up your Helm release secrets.

Upon setting `enablePSP` to `false`, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart.

As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Longhorn docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards.
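
As a sketch, the in-place upgrade described above could look like the following; `--reuse-values` keeps any other overrides you have made, since a plain `helm upgrade` would reset them to chart defaults:

```
helm upgrade longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --reuse-values \
  --set enablePSP=false
```
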

## Installation

1. Add the Longhorn chart repository.
   ```
   helm repo add longhorn https://charts.longhorn.io
   ```

2. Update local Longhorn chart information from the chart repository.
   ```
   helm repo update
   ```

3. Install the Longhorn chart.
   - With Helm 2, the following command will create the `longhorn-system` namespace and install the Longhorn chart together.
     ```
     helm install longhorn/longhorn --name longhorn --namespace longhorn-system
     ```
   - With Helm 3, the following commands will create the `longhorn-system` namespace first, then install the Longhorn chart.
     ```
     kubectl create namespace longhorn-system
     helm install longhorn longhorn/longhorn --namespace longhorn-system
     ```
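
Either way, a quick sanity check after installation is to confirm the Longhorn components come up (all pods should eventually be `Running`):

```
kubectl -n longhorn-system get pods
```
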

## Uninstallation

To uninstall Longhorn with Helm 2:
```
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
helm delete longhorn --purge
```

To uninstall Longhorn with Helm 3:
```
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
helm uninstall longhorn -n longhorn-system
kubectl delete namespace longhorn-system
```
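
In both variants, the `kubectl patch` step flips Longhorn's `deleting-confirmation-flag` setting to `true`; without it, uninstallation is blocked. As a sketch, the flag can be inspected first using the same `lhs` (Longhorn settings) shorthand:

```
kubectl -n longhorn-system get lhs deleting-confirmation-flag
```
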

## Values

The `values.yaml` file contains items used to tweak a deployment of this chart.
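
The `{{ ... }}` directives in the sections below look like [helm-docs](https://github.com/norwoodj/helm-docs) template syntax; assuming that tool is in use, the rendered README could be regenerated with something like:

```
# Regenerates README.md from README.md.gotmpl and the chart's values.yaml
helm-docs --chart-search-root ./chart
```
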

### Cattle Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "global" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Network Policies

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "networkPolicies" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Image Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "image" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Service Settings

| Key | Description |
|-----|-------------|
{{- range .Values }}
{{- if (and (hasPrefix "service" .Key) (not (contains "Account" .Key))) }}
| {{ .Key }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### StorageClass Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "persistence" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### CSI Settings

| Key | Description |
|-----|-------------|
{{- range .Values }}
{{- if hasPrefix "csi" .Key }}
| {{ .Key }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Longhorn Manager Settings

The Longhorn system contains user-deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system-managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn manager component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "longhornManager" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Longhorn Driver Settings

The Longhorn system contains user-deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system-managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn driver component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "longhornDriver" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Longhorn UI Settings

The Longhorn system contains user-deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system-managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn UI component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "longhornUI" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Ingress Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "ingress" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Private Registry Settings

Longhorn can be installed in an air-gapped environment with private registry settings. Please refer to **Air Gap Installation** on our official site: [link](https://longhorn.io/docs)

| Key | Description |
|-----|-------------|
{{- range .Values }}
{{- if hasPrefix "privateRegistry" .Key }}
| {{ .Key }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### OS/Kubernetes Distro Settings

#### OpenShift Settings

Please also refer to [ocp-readme](https://github.com/longhorn/longhorn/blob/master/chart/ocp-readme.md) for more details.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "openshift" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Other Settings

| Key | Default | Description |
|-----|---------|-------------|
{{- range .Values }}
{{- if not (or (hasPrefix "defaultSettings" .Key)
               (hasPrefix "networkPolicies" .Key)
               (hasPrefix "image" .Key)
               (hasPrefix "service" .Key)
               (hasPrefix "persistence" .Key)
               (hasPrefix "csi" .Key)
               (hasPrefix "longhornManager" .Key)
               (hasPrefix "longhornDriver" .Key)
               (hasPrefix "longhornUI" .Key)
               (hasPrefix "privateRegistry" .Key)
               (hasPrefix "ingress" .Key)
               (hasPrefix "openshift" .Key)
               (hasPrefix "global" .Key)) }}
| {{ .Key }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### System Default Settings

For the system default settings, you can leave the fields blank initially; the default values will be applied when installing Longhorn.
You can then change them through the UI after installation.
For more details, such as types or options, refer to the **Settings Reference** on our official site: [link](https://longhorn.io/docs)

| Key | Description |
|-----|-------------|
{{- range .Values }}
{{- if hasPrefix "defaultSettings" .Key }}
| {{ .Key }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

---
Please see [link](https://github.com/longhorn/longhorn) for more information.
177 chart/ocp-readme.md Normal file
@@ -0,0 +1,177 @@
# OpenShift / OKD Extra Configuration Steps

- [OpenShift / OKD Extra Configuration Steps](#openshift--okd-extra-configuration-steps)
  - [Notes](#notes)
  - [Known Issues](#known-issues)
  - [Preparing Nodes (Optional)](#preparing-nodes-optional)
    - [Default /var/lib/longhorn setup](#default-varliblonghorn-setup)
    - [Separate /var/mnt/longhorn setup](#separate-varmntlonghorn-setup)
      - [Create Filesystem](#create-filesystem)
      - [Mounting Disk On Boot](#mounting-disk-on-boot)
      - [Label and Annotate Nodes](#label-and-annotate-nodes)
  - [Example values.yaml](#example-valuesyaml)
  - [Installation](#installation)
  - [Refs](#refs)

## Notes

Main changes and tasks for OCP are:

- On OCP / OKD, the operating system is managed by the cluster
- OCP imposes [Security Context Constraints](https://docs.openshift.com/container-platform/4.11/authentication/managing-security-context-constraints.html)
  - This requires everything to run with the least privilege possible. For the moment, every component has been given access to run with higher privileges.
  - Something to circle back on is network policies and which components can have their privileges reduced without impacting functionality.
    - The UI, for example, probably can.
- openshift/oauth-proxy for authentication to the Longhorn UI
  - **⚠️** Currently scoped to authenticated users that can delete a Longhorn settings object.
  - **⚠️** Since the UI itself is not protected, network policies will need to be created to prevent namespace <--> namespace communication against the pod or service object directly.
  - Anyone with access to the UI deployment can remove the route restriction. (Namespace-scoped admin)
- Option to use a separate disk in /var/mnt/longhorn & a MachineConfig file to mount /var/mnt/longhorn
- Adding finalizers for mount propagation

## Known Issues

- General Feature/Issue Thread
  - [[FEATURE] Deploying Longhorn on OKD/Openshift](https://github.com/longhorn/longhorn/issues/1831)
- 4.10 / 1.23:
  - 4.10.0-0.okd-2022-03-07-131213 to 4.10.0-0.okd-2022-07-09-073606
    - Tested, No Known Issues
- 4.11 / 1.24:
  - 4.11.0-0.okd-2022-07-27-052000 to 4.11.0-0.okd-2022-11-19-050030
    - Tested, No Known Issues
  - 4.11.0-0.okd-2022-12-02-145640, 4.11.0-0.okd-2023-01-14-152430:
    - Workaround: [[BUG] Volumes Stuck in Attach/Detach Loop](https://github.com/longhorn/longhorn/issues/4988)
    - [MachineConfig Patch](https://github.com/longhorn/longhorn/issues/4988#issuecomment-1345676772)
- 4.12 / 1.25:
  - 4.12.0-0.okd-2022-12-05-210624 to 4.12.0-0.okd-2023-01-20-101927
    - Tested, No Known Issues
  - 4.12.0-0.okd-2023-01-21-055900 to 4.12.0-0.okd-2023-02-18-033438:
    - Workaround: [[BUG] Volumes Stuck in Attach/Detach Loop](https://github.com/longhorn/longhorn/issues/4988)
    - [MachineConfig Patch](https://github.com/longhorn/longhorn/issues/4988#issuecomment-1345676772)
  - 4.12.0-0.okd-2023-03-05-022504 to 4.12.0-0.okd-2023-04-16-041331:
    - Tested, No Known Issues
- 4.13 / 1.26:
  - 4.13.0-0.okd-2023-05-03-001308 to 4.13.0-0.okd-2023-08-18-135805:
    - Tested, No Known Issues
- 4.14 / 1.27:
  - 4.14.0-0.okd-2023-08-12-022330 to 4.14.0-0.okd-2023-10-28-073550:
    - Tested, No Known Issues

## Preparing Nodes (Optional)

Only required if you need additional customizations, such as storage-less nodes or secondary disks.

### Default /var/lib/longhorn setup

Label each node for storage with:

```bash
oc get nodes --no-headers | awk '{print $1}'

export NODE="worker-0"
oc label node "${NODE}" node.longhorn.io/create-default-disk=true
```

### Separate /var/mnt/longhorn setup

#### Create Filesystem

On the storage nodes, create a filesystem with the label `longhorn`:

```bash
oc get nodes --no-headers | awk '{print $1}'

export NODE="worker-0"
oc debug node/${NODE} -t -- chroot /host bash

# Validate that the target drive is present
lsblk

export DRIVE="sdb" #vdb
sudo mkfs.ext4 -L longhorn /dev/${DRIVE}
```
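
Before leaving the debug shell, it is worth confirming the filesystem label; a quick sketch:

```bash
# The output should include LABEL="longhorn" TYPE="ext4"
sudo blkid /dev/${DRIVE}
```
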

> ⚠️ Note: If you add new nodes after the MachineConfig below is applied, you will also need to reboot those nodes.

#### Mounting Disk On Boot

The secondary drive needs to be mounted on every boot. Save the contents below and apply the MachineConfig with `oc apply -f` (a sketch follows the manifest):

> ⚠️ This will trigger a machine config profile update and reboot all worker nodes on the cluster

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 71-mount-storage-worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: var-mnt-longhorn.mount
          enabled: true
          contents: |
            [Unit]
            Before=local-fs.target
            [Mount]
            Where=/var/mnt/longhorn
            What=/dev/disk/by-label/longhorn
            Options=rw,relatime,discard
            [Install]
            WantedBy=local-fs.target
```
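
A sketch of applying the manifest and watching the rollout (the file name is illustrative; `mcp` is the built-in short name for machineconfigpools):

```bash
oc apply -f 71-mount-storage-worker.yaml # the MachineConfig above, saved locally
# Workers reboot one at a time; wait for the pool to report UPDATED=True
oc get mcp worker -w
```
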

#### Label and Annotate Nodes

Label and annotate the storage nodes like this:

```bash
oc get nodes --no-headers | awk '{print $1}'

export NODE="worker-0"
oc annotate node ${NODE} --overwrite node.longhorn.io/default-disks-config='[{"path":"/var/mnt/longhorn","allowScheduling":true}]'
oc label node ${NODE} node.longhorn.io/create-default-disk=config
```
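
To double-check that the label and annotation landed on a node, a quick sketch:

```bash
oc get node "${NODE}" --show-labels | grep create-default-disk
oc describe node "${NODE}" | grep default-disks-config
```
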

## Example values.yaml

Minimum adjustments required:

```yaml
openshift:
  enabled: true
  ui:
    route: "longhorn-ui"
    port: 443
    proxy: 8443
  oauthProxy:
    repository: quay.io/openshift/origin-oauth-proxy
    tag: 4.14 # Use Your OCP/OKD 4.X Version, Current Stable is 4.14

# defaultSettings: # Preparing nodes (Optional)
#   createDefaultDiskLabeledNodes: true
```

## Installation

```bash
# helm template ./chart/ --namespace longhorn-system --values ./chart/values.yaml --no-hooks > longhorn.yaml # Local Testing
helm template longhorn --namespace longhorn-system --values values.yaml --no-hooks > longhorn.yaml
oc create namespace longhorn-system -o yaml --dry-run=client | oc apply -f -
oc apply -f longhorn.yaml -n longhorn-system
```
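
Assuming the `longhorn-ui` route name from the example values above, a short post-install sanity check might be:

```bash
oc get pods -n longhorn-system
oc get route longhorn-ui -n longhorn-system
```
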

## Refs

- <https://docs.openshift.com/container-platform/4.11/storage/persistent_storage/persistent-storage-iscsi.html>
- <https://docs.okd.io/4.11/storage/persistent_storage/persistent-storage-iscsi.html>
- okd 4.5: <https://github.com/longhorn/longhorn/issues/1831#issuecomment-702690613>
- okd 4.6: <https://github.com/longhorn/longhorn/issues/1831#issuecomment-765884631>
- oauth-proxy: <https://github.com/openshift/oauth-proxy/blob/master/contrib/sidecar.yaml>
- <https://github.com/longhorn/longhorn/issues/1831>

@@ -17,7 +17,7 @@ questions:
   label: Longhorn Manager Image Repository
   group: "Longhorn Images Settings"
 - variable: image.longhorn.manager.tag
-  default: v1.2.3
+  default: master-head
   description: "Specify Longhorn Manager Image Tag"
   type: string
   label: Longhorn Manager Image Tag
@@ -29,7 +29,7 @@ questions:
   label: Longhorn Engine Image Repository
   group: "Longhorn Images Settings"
 - variable: image.longhorn.engine.tag
-  default: v1.2.3
+  default: master-head
   description: "Specify Longhorn Engine Image Tag"
   type: string
   label: Longhorn Engine Image Tag
@@ -41,7 +41,7 @@ questions:
   label: Longhorn UI Image Repository
   group: "Longhorn Images Settings"
 - variable: image.longhorn.ui.tag
-  default: v1.2.3
+  default: master-head
   description: "Specify Longhorn UI Image Tag"
   type: string
   label: Longhorn UI Image Tag
@@ -53,7 +53,7 @@ questions:
   label: Longhorn Instance Manager Image Repository
   group: "Longhorn Images Settings"
 - variable: image.longhorn.instanceManager.tag
-  default: v1_20211210
+  default: v2_20221123
   description: "Specify Longhorn Instance Manager Image Tag"
   type: string
   label: Longhorn Instance Manager Image Tag
@@ -65,7 +65,7 @@ questions:
   label: Longhorn Share Manager Image Repository
   group: "Longhorn Images Settings"
 - variable: image.longhorn.shareManager.tag
-  default: v1_20211020
+  default: v1_20220914
   description: "Specify Longhorn Share Manager Image Tag"
   type: string
   label: Longhorn Share Manager Image Tag
@@ -77,11 +77,23 @@ questions:
   label: Longhorn Backing Image Manager Image Repository
   group: "Longhorn Images Settings"
 - variable: image.longhorn.backingImageManager.tag
-  default: v2_20210820
+  default: v3_20220808
   description: "Specify Longhorn Backing Image Manager Image Tag"
   type: string
   label: Longhorn Backing Image Manager Image Tag
   group: "Longhorn Images Settings"
+- variable: image.longhorn.supportBundleKit.repository
+  default: longhornio/support-bundle-kit
+  description: "Specify Longhorn Support Bundle Manager Image Repository"
+  type: string
+  label: Longhorn Support Bundle Kit Image Repository
+  group: "Longhorn Images Settings"
+- variable: image.longhorn.supportBundleKit.tag
+  default: v0.0.27
+  description: "Specify Longhorn Support Bundle Manager Image Tag"
+  type: string
+  label: Longhorn Support Bundle Kit Image Tag
+  group: "Longhorn Images Settings"
 - variable: image.csi.attacher.repository
   default: longhornio/csi-attacher
   description: "Specify CSI attacher image repository. Leave blank to autodetect."
@@ -89,7 +101,7 @@ questions:
   label: Longhorn CSI Attacher Image Repository
   group: "Longhorn CSI Driver Images"
 - variable: image.csi.attacher.tag
-  default: v3.2.1
+  default: v4.2.0
   description: "Specify CSI attacher image tag. Leave blank to autodetect."
   type: string
   label: Longhorn CSI Attacher Image Tag
@@ -101,7 +113,7 @@ questions:
   label: Longhorn CSI Provisioner Image Repository
   group: "Longhorn CSI Driver Images"
 - variable: image.csi.provisioner.tag
-  default: v2.1.2
+  default: v3.4.1
   description: "Specify CSI provisioner image tag. Leave blank to autodetect."
   type: string
   label: Longhorn CSI Provisioner Image Tag
@@ -113,7 +125,7 @@ questions:
   label: Longhorn CSI Node Driver Registrar Image Repository
   group: "Longhorn CSI Driver Images"
 - variable: image.csi.nodeDriverRegistrar.tag
-  default: v2.3.0
+  default: v2.7.0
   description: "Specify CSI Node Driver Registrar image tag. Leave blank to autodetect."
   type: string
   label: Longhorn CSI Node Driver Registrar Image Tag
@@ -125,7 +137,7 @@ questions:
   label: Longhorn CSI Driver Resizer Image Repository
   group: "Longhorn CSI Driver Images"
 - variable: image.csi.resizer.tag
-  default: v1.2.0
+  default: v1.7.0
   description: "Specify CSI Driver Resizer image tag. Leave blank to autodetect."
   type: string
   label: Longhorn CSI Driver Resizer Image Tag
@@ -137,35 +149,53 @@ questions:
   label: Longhorn CSI Driver Snapshotter Image Repository
   group: "Longhorn CSI Driver Images"
 - variable: image.csi.snapshotter.tag
-  default: v3.0.3
+  default: v6.2.1
   description: "Specify CSI Driver Snapshotter image tag. Leave blank to autodetect."
   type: string
   label: Longhorn CSI Driver Snapshotter Image Tag
   group: "Longhorn CSI Driver Images"
+- variable: image.csi.livenessProbe.repository
+  default: longhornio/livenessprobe
+  description: "Specify CSI liveness probe image repository. Leave blank to autodetect."
+  type: string
+  label: Longhorn CSI Liveness Probe Image Repository
+  group: "Longhorn CSI Driver Images"
+- variable: image.csi.livenessProbe.tag
+  default: v2.9.0
+  description: "Specify CSI liveness probe image tag. Leave blank to autodetect."
+  type: string
+  label: Longhorn CSI Liveness Probe Image Tag
+  group: "Longhorn CSI Driver Images"
 - variable: privateRegistry.registryUrl
   label: Private registry URL
   description: "URL of private registry. Leave blank to apply system default registry."
   group: "Private Registry Settings"
   type: string
   default: ""
-- variable: privateRegistry.registryUser
-  label: Private registry user
-  description: "User used to authenticate to private registry"
-  group: "Private Registry Settings"
-  type: string
-  default: ""
-- variable: privateRegistry.registryPasswd
-  label: Private registry password
-  description: "Password used to authenticate to private registry"
-  group: "Private Registry Settings"
-  type: password
-  default: ""
 - variable: privateRegistry.registrySecret
   label: Private registry secret name
-  description: "Longhorn will automatically generate a Kubernetes secret with this name and use it to pull images from your private registry."
+  description: "If create a new private registry secret is true, create a Kubernetes secret with this name; else use the existing secret of this name. Use it to pull images from your private registry."
   group: "Private Registry Settings"
   type: string
   default: ""
+- variable: privateRegistry.createSecret
+  default: "true"
+  description: "Create a new private registry secret"
+  type: boolean
+  group: "Private Registry Settings"
+  label: Create Secret for Private Registry Settings
+  show_subquestion_if: true
+  subquestions:
+  - variable: privateRegistry.registryUser
+    label: Private registry user
+    description: "User used to authenticate to private registry."
+    type: string
+    default: ""
+  - variable: privateRegistry.registryPasswd
+    label: Private registry password
+    description: "Password used to authenticate to private registry."
+    type: password
+    default: ""
 - variable: longhorn.default_setting
   default: "false"
   description: "Customize the default settings before installing Longhorn for the first time. This option will only work if the cluster hasn't installed Longhorn."
@@ -214,7 +244,7 @@ questions:
   group: "Longhorn CSI Driver Settings"
 - variable: defaultSettings.backupTarget
   label: Backup Target
-  description: "The endpoint used to access the backupstore. NFS and S3 are supported."
+  description: "The endpoint used to access the backupstore. Available: NFS, CIFS, AWS, GCP, AZURE"
   group: "Longhorn Default Settings"
   type: string
   default:
@@ -226,8 +256,7 @@ questions:
   default:
 - variable: defaultSettings.allowRecurringJobWhileVolumeDetached
   label: Allow Recurring Job While Volume Is Detached
-  description: 'If this setting is enabled, Longhorn will automatically attaches the volume and takes snapshot/backup when it is the time to do recurring snapshot/backup.
-Note that the volume is not ready for workload during the period when the volume was automatically attached. Workload will have to wait until the recurring job finishes.'
+  description: 'If this setting is enabled, Longhorn will automatically attaches the volume and takes snapshot/backup when it is the time to do recurring snapshot/backup.'
   group: "Longhorn Default Settings"
   type: boolean
   default: "false"
@@ -245,11 +274,7 @@ Note that the volume is not ready for workload during the period when the volume
   default: "/var/lib/longhorn/"
 - variable: defaultSettings.defaultDataLocality
   label: Default Data Locality
-  description: 'We say a Longhorn volume has data locality if there is a local replica of the volume on the same node as the pod which is using the volume.
-This setting specifies the default data locality when a volume is created from the Longhorn UI. For Kubernetes configuration, update the `dataLocality` in the StorageClass
-The available modes are:
-- **disabled**. This is the default option. There may or may not be a replica on the same node as the attached volume (workload)
-- **best-effort**. This option instructs Longhorn to try to keep a replica on the same node as the attached volume (workload). Longhorn will not stop the volume, even if it cannot keep a replica local to the attached volume (workload) due to environment limitation, e.g. not enough disk space, incompatible disk tags, etc.'
+  description: 'Longhorn volume has data locality if there is a local replica of the volume on the same node as the pod which is using the volume.'
   group: "Longhorn Default Settings"
   type: enum
   options:
@@ -264,17 +289,7 @@ The available modes are:
   default: "false"
 - variable: defaultSettings.replicaAutoBalance
   label: Replica Auto Balance
-  description: 'Enable this setting automatically rebalances replicas when discovered an available node.
-The available global options are:
-- **disabled**. This is the default option. No replica auto-balance will be done.
-- **least-effort**. This option instructs Longhorn to balance replicas for minimal redundancy.
-- **best-effort**. This option instructs Longhorn to balance replicas for even redundancy.
-Longhorn also support individual volume setting. The setting can be specified in volume.spec.replicaAutoBalance, this overrules the global setting.
-The available volume spec options are:
-- **ignored**. This is the default option that instructs Longhorn to inherit from the global setting.
-- **disabled**. This option instructs Longhorn no replica auto-balance should be done.
-- **least-effort**. This option instructs Longhorn to balance replicas for minimal redundancy.
-- **best-effort**. This option instructs Longhorn to balance replicas for even redundancy.'
+  description: 'Enable this setting automatically rebalances replicas when discovered an available node.'
   group: "Longhorn Default Settings"
   type: enum
   options:
@@ -297,6 +312,14 @@ The available volume spec options are:
   min: 0
   max: 100
   default: 25
+- variable: defaultSettings.storageReservedPercentageForDefaultDisk
+  label: Storage Reserved Percentage For Default Disk
+  description: "The reserved percentage specifies the percentage of disk space that will not be allocated to the default disk on each new Longhorn node."
+  group: "Longhorn Default Settings"
+  type: int
+  min: 0
+  max: 100
+  default: 30
 - variable: defaultSettings.upgradeChecker
   label: Enable Upgrade Checker
   description: 'Upgrade Checker will check for new Longhorn version periodically. When there is a new version available, a notification will appear in the UI. By default true.'
@@ -324,6 +347,40 @@ The available volume spec options are:
   type: int
   min: 0
   default: 300
+- variable: defaultSettings.failedBackupTTL
+  label: Failed Backup Time to Live
+  description: "In minutes. This setting determines how long Longhorn will keep the backup resource that was failed. Set to 0 to disable the auto-deletion."
+  group: "Longhorn Default Settings"
+  type: int
+  min: 0
+  default: 1440
+- variable: defaultSettings.restoreVolumeRecurringJobs
+  label: Restore Volume Recurring Jobs
+  description: "Restore recurring jobs from the backup volume on the backup target and create recurring jobs if not exist during a backup restoration."
+  group: "Longhorn Default Settings"
+  type: boolean
+  default: "false"
+- variable: defaultSettings.recurringSuccessfulJobsHistoryLimit
+  label: Cronjob Successful Jobs History Limit
+  description: "This setting specifies how many successful backup or snapshot job histories should be retained. History will not be retained if the value is 0."
+  group: "Longhorn Default Settings"
+  type: int
+  min: 0
+  default: 1
+- variable: defaultSettings.recurringFailedJobsHistoryLimit
+  label: Cronjob Failed Jobs History Limit
+  description: "This setting specifies how many failed backup or snapshot job histories should be retained. History will not be retained if the value is 0."
+  group: "Longhorn Default Settings"
+  type: int
+  min: 0
+  default: 1
+- variable: defaultSettings.supportBundleFailedHistoryLimit
+  label: SupportBundle Failed History Limit
+  description: "This setting specifies how many failed support bundles can exist in the cluster. Set this value to **0** to have Longhorn automatically purge all failed support bundles."
+  group: "Longhorn Default Settings"
+  type: int
+  min: 0
+  default: 1
 - variable: defaultSettings.autoSalvage
   label: Automatic salvage
   description: "If enabled, volumes will be automatically salvaged when all the replicas become faulty e.g. due to network disconnection. Longhorn will try to figure out which replica(s) are usable, then use them for the volume. By default true."
@@ -332,9 +389,7 @@ The available volume spec options are:
   default: "true"
 - variable: defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly
   label: Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly
-  description: 'If enabled, Longhorn will automatically delete the workload pod that is managed by a controller (e.g. deployment, statefulset, daemonset, etc...) when Longhorn volume is detached unexpectedly (e.g. during Kubernetes upgrade, Docker reboot, or network disconnect). By deleting the pod, its controller restarts the pod and Kubernetes handles volume reattachment and remount.
-If disabled, Longhorn will not delete the workload pod that is managed by a controller. You will have to manually restart the pod to reattach and remount the volume.
-**Note:** This setting does not apply to the workload pods that do not have a controller. Longhorn never deletes them.'
+  description: 'If enabled, Longhorn will automatically delete the workload pod that is managed by a controller (e.g. deployment, statefulset, daemonset, etc...) when Longhorn volume is detached unexpectedly (e.g. during Kubernetes upgrade, Docker reboot, or network disconnect). By deleting the pod, its controller restarts the pod and Kubernetes handles volume reattachment and remount.'
   group: "Longhorn Default Settings"
   type: boolean
   default: "true"
@@ -350,13 +405,27 @@ If disabled, Longhorn will not delete the workload pod that is managed by a cont
   group: "Longhorn Default Settings"
   type: boolean
   default: "true"
+- variable: defaultSettings.replicaDiskSoftAntiAffinity
+  label: Replica Disk Level Soft Anti-Affinity
+  description: 'Allow scheduling on disks with existing healthy replicas of the same volume. By default true.'
+  group: "Longhorn Default Settings"
+  type: boolean
+  default: "true"
+- variable: defaultSettings.allowEmptyNodeSelectorVolume
+  label: Allow Empty Node Selector Volume
+  description: "Allow Scheduling Empty Node Selector Volumes To Any Node"
+  group: "Longhorn Default Settings"
+  type: boolean
+  default: "true"
+- variable: defaultSettings.allowEmptyDiskSelectorVolume
+  label: Allow Empty Disk Selector Volume
+  description: "Allow Scheduling Empty Disk Selector Volumes To Any Disk"
+  group: "Longhorn Default Settings"
+  type: boolean
+  default: "true"
 - variable: defaultSettings.nodeDownPodDeletionPolicy
   label: Pod Deletion Policy When Node is Down
-  description: "Defines the Longhorn action when a Volume is stuck with a StatefulSet/Deployment Pod on a node that is down.
-- **do-nothing** is the default Kubernetes behavior of never force deleting StatefulSet/Deployment terminating pods. Since the pod on the node that is down isn't removed, Longhorn volumes are stuck on nodes that are down.
-- **delete-statefulset-pod** Longhorn will force delete StatefulSet terminating pods on nodes that are down to release Longhorn volumes so that Kubernetes can spin up replacement pods.
-- **delete-deployment-pod** Longhorn will force delete Deployment terminating pods on nodes that are down to release Longhorn volumes so that Kubernetes can spin up replacement pods.
-- **delete-both-statefulset-and-deployment-pod** Longhorn will force delete StatefulSet/Deployment terminating pods on nodes that are down to release Longhorn volumes so that Kubernetes can spin up replacement pods."
+  description: "Defines the Longhorn action when a Volume is stuck with a StatefulSet/Deployment Pod on a node that is down."
   group: "Longhorn Default Settings"
   type: enum
   options:
@@ -365,47 +434,40 @@ If disabled, Longhorn will not delete the workload pod that is managed by a cont
   - "delete-deployment-pod"
   - "delete-both-statefulset-and-deployment-pod"
   default: "do-nothing"
-- variable: defaultSettings.allowNodeDrainWithLastHealthyReplica
-  label: Allow Node Drain with the Last Healthy Replica
-  description: "By default, Longhorn will block `kubectl drain` action on a node if the node contains the last healthy replica of a volume.
-If this setting is enabled, Longhorn will **not** block `kubectl drain` action on a node even if the node contains the last healthy replica of a volume."
+- variable: defaultSettings.nodeDrainPolicy
+  label: Node Drain Policy
+  description: "Define the policy to use when a node with the last healthy replica of a volume is drained."
   group: "Longhorn Default Settings"
-  type: boolean
-  default: "false"
-- variable: defaultSettings.mkfsExt4Parameters
-  label: Custom mkfs.ext4 parameters
-  description: "Allows setting additional filesystem creation parameters for ext4. For older host kernels it might be necessary to disable the optional ext4 metadata_csum feature by specifying `-O ^64bit,^metadata_csum`."
-  group: "Longhorn Default Settings"
-  type: string
-- variable: defaultSettings.disableReplicaRebuild
-  label: Disable Replica Rebuild
-  description: "This setting disable replica rebuild cross the whole cluster, eviction and data locality feature won't work if this setting is true. But doesn't have any impact to any current replica rebuild and restore disaster recovery volume."
-  group: "Longhorn Default Settings"
-  type: boolean
-  default: "false"
+  type: enum
+  options:
+  - "block-if-contains-last-replica"
+  - "allow-if-replica-is-stopped"
+  - "always-allow"
+  default: "block-if-contains-last-replica"
 - variable: defaultSettings.replicaReplenishmentWaitInterval
   label: Replica Replenishment Wait Interval
-  description: "In seconds. The interval determines how long Longhorn will wait at least in order to reuse the existing data on a failed replica rather than directly creating a new replica for a degraded volume.
-Warning: This option works only when there is a failed replica in the volume. And this option may block the rebuilding for a while in the case."
+  description: "In seconds. The interval determines how long Longhorn will wait at least in order to reuse the existing data on a failed replica rather than directly creating a new replica for a degraded volume."
   group: "Longhorn Default Settings"
   type: int
   min: 0
   default: 600
 - variable: defaultSettings.concurrentReplicaRebuildPerNodeLimit
   label: Concurrent Replica Rebuild Per Node Limit
-  description: "This setting controls how many replicas on a node can be rebuilt simultaneously.
-Typically, Longhorn can block the replica starting once the current rebuilding count on a node exceeds the limit. But when the value is 0, it means disabling the replica rebuilding.
-WARNING:
-  - The old setting \"Disable Replica Rebuild\" is replaced by this setting.
-  - Different from relying on replica starting delay to limit the concurrent rebuilding, if the rebuilding is disabled, replica object replenishment will be directly skipped.
-  - When the value is 0, the eviction and data locality feature won't work. But this shouldn't have any impact to any current replica rebuild and backup restore."
+  description: "This setting controls how many replicas on a node can be rebuilt simultaneously."
   group: "Longhorn Default Settings"
   type: int
   min: 0
   default: 5
+- variable: defaultSettings.concurrentVolumeBackupRestorePerNodeLimit
+  label: Concurrent Volume Backup Restore Per Node Limit
+  description: "This setting controls how many volumes on a node can restore the backup concurrently. Set the value to **0** to disable backup restore."
+  group: "Longhorn Default Settings"
+  type: int
+  min: 0
+  default: 5
 - variable: defaultSettings.disableRevisionCounter
   label: Disable Revision Counter
-  description: "This setting is only for volumes created by UI. By default, this is false meaning there will be a reivision counter file to track every write to the volume. During salvage recovering Longhorn will pick the repica with largest reivision counter as candidate to recover the whole volume. If revision counter is disabled, Longhorn will not track every write to the volume. During the salvage recovering, Longhorn will use the 'volume-head-xxx.img' file last modification time and file size to pick the replica candidate to recover the whole volume."
+  description: "This setting is only for volumes created by UI. By default, this is false meaning there will be a reivision counter file to track every write to the volume. During salvage recovering Longhorn will pick the replica with largest reivision counter as candidate to recover the whole volume. If revision counter is disabled, Longhorn will not track every write to the volume. During the salvage recovering, Longhorn will use the 'volume-head-xxx.img' file last modification time and file size to pick the replica candidate to recover the whole volume."
   group: "Longhorn Default Settings"
   type: boolean
   default: "false"
@ -447,50 +509,127 @@ WARNING:
|
||||
default: 60
|
||||
- variable: defaultSettings.backingImageRecoveryWaitInterval
|
||||
label: Backing Image Recovery Wait Interval
|
||||
description: "This interval in seconds determines how long Longhorn will wait before re-downloading the backing image file when all disk files of this backing image become failed or unknown.
|
||||
WARNING:
|
||||
- This recovery only works for the backing image of which the creation type is \"download\".
|
||||
- File state \"unknown\" means the related manager pods on the pod is not running or the node itself is down/disconnected."
|
||||
description: "This interval in seconds determines how long Longhorn will wait before re-downloading the backing image file when all disk files of this backing image become failed or unknown."
|
||||
group: "Longhorn Default Settings"
|
||||
type: int
|
||||
min: 0
|
||||
default: 300
|
||||
- variable: defaultSettings.guaranteedEngineManagerCPU
|
||||
label: Guaranteed Engine Manager CPU
|
||||
description: "This integer value indicates how many percentage of the total allocatable CPU on each node will be reserved for each engine manager Pod. For example, 10 means 10% of the total CPU on a node will be allocated to each engine manager pod on this node. This will help maintain engine stability during high node workload.
|
||||
In order to prevent unexpected volume engine crash as well as guarantee a relative acceptable IO performance, you can use the following formula to calculate a value for this setting:
|
||||
Guaranteed Engine Manager CPU = The estimated max Longhorn volume engine count on a node * 0.1 / The total allocatable CPUs on the node * 100.
|
||||
The result of above calculation doesn't mean that's the maximum CPU resources the Longhorn workloads require. To fully exploit the Longhorn volume I/O performance, you can allocate/guarantee more CPU resources via this setting.
|
||||
If it's hard to estimate the usage now, you can leave it with the default value, which is 12%. Then you can tune it when there is no running workload using Longhorn volumes.
|
||||
WARNING:
|
||||
- Value 0 means unsetting CPU requests for engine manager pods.
|
||||
- Considering the possible new instance manager pods in the further system upgrade, this integer value is range from 0 to 40. And the sum with setting 'Guaranteed Engine Manager CPU' should not be greater than 40.
|
||||
- One more set of instance manager pods may need to be deployed when the Longhorn system is upgraded. If current available CPUs of the nodes are not enough for the new instance manager pods, you need to detach the volumes using the oldest instance manager pods so that Longhorn can clean up the old pods automatically and release the CPU resources. And the new pods with the latest instance manager image will be launched then.
|
||||
- This global setting will be ignored for a node if the field \"EngineManagerCPURequest\" on the node is set.
|
||||
- After this setting is changed, all engine manager pods using this global setting on all the nodes will be automatically restarted. In other words, DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES."
- variable: defaultSettings.guaranteedInstanceManagerCPU
label: Guaranteed Instance Manager CPU
description: "This integer value indicates what percentage of the total allocatable CPU on each node will be reserved for each instance manager Pod. You can leave it at the default value, which is 12%."
group: "Longhorn Default Settings"
type: int
min: 0
max: 40
default: 12
- variable: defaultSettings.guaranteedReplicaManagerCPU
label: Guaranteed Replica Manager CPU
description: "This integer value indicates what percentage of the total allocatable CPU on each node will be reserved for each replica manager Pod. 10 means 10% of the total CPU on a node will be allocated to each replica manager pod on this node. This will help maintain replica stability during high node workload.
In order to prevent unexpected volume replica crashes as well as guarantee relatively acceptable I/O performance, you can use the following formula to calculate a value for this setting:
Guaranteed Replica Manager CPU = The estimated max Longhorn volume replica count on a node * 0.1 / The total allocatable CPUs on the node * 100.
The result of the above calculation is not the maximum CPU resources the Longhorn workloads require. To fully exploit the Longhorn volume I/O performance, you can allocate/guarantee more CPU resources via this setting.
If it's hard to estimate the usage now, you can leave it at the default value, which is 12%. Then you can tune it when there is no running workload using Longhorn volumes.
WARNING:
- Value 0 means unsetting CPU requests for replica manager pods.
- Considering the possible new instance manager pods in a future system upgrade, this integer value ranges from 0 to 40, and its sum with the setting 'Guaranteed Engine Manager CPU' should not be greater than 40.
- One more set of instance manager pods may need to be deployed when the Longhorn system is upgraded. If the currently available CPUs of the nodes are not enough for the new instance manager pods, you need to detach the volumes using the oldest instance manager pods so that Longhorn can clean up the old pods automatically and release the CPU resources. The new pods with the latest instance manager image will then be launched.
- This global setting will be ignored for a node if the field \"ReplicaManagerCPURequest\" on the node is set.
- After this setting is changed, all replica manager pods using this global setting on all the nodes will be automatically restarted. In other words, DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES."
- variable: defaultSettings.logLevel
label: Log Level
description: "The log level (Panic, Fatal, Error, Warn, Info, Debug, or Trace) used in the longhorn manager. Defaults to Info."
group: "Longhorn Default Settings"
type: string
default: "Info"
- variable: defaultSettings.kubernetesClusterAutoscalerEnabled
label: Kubernetes Cluster Autoscaler Enabled (Experimental)
description: "Enabling this setting will notify Longhorn that the cluster is using the Kubernetes Cluster Autoscaler."
group: "Longhorn Default Settings"
type: boolean
default: false
- variable: defaultSettings.orphanAutoDeletion
label: Orphaned Data Cleanup
description: "This setting allows Longhorn to automatically delete orphan resources and their corresponding orphaned data, such as stale replicas. Orphan resources on down or unknown nodes will not be cleaned up automatically."
group: "Longhorn Default Settings"
type: boolean
default: false
- variable: defaultSettings.storageNetwork
label: Storage Network
description: "Longhorn uses the storage network for in-cluster data traffic. Leave this blank to use the Kubernetes cluster network."
group: "Longhorn Default Settings"
type: string
default:
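For reference, this value is expected to name a Multus NetworkAttachmentDefinition in the form <namespace>/<name> (stated here from Longhorn's storage-network documentation, not from this diff); a values.yaml sketch with a hypothetical definition:

defaultSettings:
  storageNetwork: "kube-system/storage-network"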
- variable: defaultSettings.deletingConfirmationFlag
label: Deleting Confirmation Flag
description: "This flag is designed to prevent Longhorn from being accidentally uninstalled, which would lead to data loss."
group: "Longhorn Default Settings"
type: boolean
default: "false"
- variable: defaultSettings.engineReplicaTimeout
label: Timeout between Engine and Replica
description: "In seconds. The setting specifies the timeout between the engine and replica(s); the value should be between 8 and 30 seconds. The default value is 8 seconds."
group: "Longhorn Default Settings"
type: int
min: 0
max: 40
default: 12
default: "8"
- variable: defaultSettings.snapshotDataIntegrity
label: Snapshot Data Integrity
description: "This setting allows users to enable or disable snapshot hashing and data integrity checking."
group: "Longhorn Default Settings"
type: string
default: "disabled"
- variable: defaultSettings.snapshotDataIntegrityImmediateCheckAfterSnapshotCreation
label: Immediate Snapshot Data Integrity Check After Creating a Snapshot
description: "Hashing snapshot disk files impacts the performance of the system. The immediate snapshot hashing and checking can be disabled to minimize the impact after creating a snapshot."
group: "Longhorn Default Settings"
type: boolean
default: "false"
- variable: defaultSettings.snapshotDataIntegrityCronjob
label: Snapshot Data Integrity Check CronJob
description: "Unix-cron string format. The setting specifies when Longhorn checks the data integrity of snapshot disk files."
group: "Longhorn Default Settings"
type: string
default: "0 0 */7 * *"
- variable: defaultSettings.removeSnapshotsDuringFilesystemTrim
label: Remove Snapshots During Filesystem Trim
description: "This setting allows the Longhorn filesystem trim feature to automatically mark the latest snapshot and its ancestors as removed, stopping at the snapshot containing multiple children."
group: "Longhorn Default Settings"
type: boolean
default: "false"
- variable: defaultSettings.fastReplicaRebuildEnabled
label: Fast Replica Rebuild Enabled
description: "This feature enables fast replica rebuilding. It relies on the checksums of snapshot disk files, so setting snapshot-data-integrity to **enabled** or **fast-check** is a prerequisite."
group: "Longhorn Default Settings"
type: boolean
default: false
- variable: defaultSettings.replicaFileSyncHttpClientTimeout
label: Timeout of HTTP Client to Replica File Sync Server
description: "In seconds. The setting specifies the HTTP client timeout to the file sync server."
group: "Longhorn Default Settings"
type: int
default: "30"
- variable: defaultSettings.backupCompressionMethod
label: Backup Compression Method
description: "This setting allows users to specify the backup compression method."
group: "Longhorn Default Settings"
type: string
default: "lz4"
- variable: defaultSettings.backupConcurrentLimit
label: Backup Concurrent Limit Per Backup
description: "This setting controls how many worker threads run concurrently per backup."
group: "Longhorn Default Settings"
type: int
min: 1
default: 2
- variable: defaultSettings.restoreConcurrentLimit
label: Restore Concurrent Limit Per Backup
description: "This setting controls how many worker threads run concurrently per restore."
group: "Longhorn Default Settings"
type: int
min: 1
default: 2
- variable: defaultSettings.v2DataEngine
label: V2 Data Engine
description: "This allows users to activate the v2 data engine based on SPDK. Currently, it is in the preview phase and should not be used in a production environment."
group: "Longhorn V2 Data Engine (Preview Feature) Settings"
type: boolean
default: false
- variable: defaultSettings.offlineReplicaRebuilding
label: Offline Replica Rebuilding
description: "This setting allows users to enable offline replica rebuilding for volumes using the v2 data engine."
group: "Longhorn V2 Data Engine (Preview Feature) Settings"
required: true
type: enum
options:
- "enabled"
- "disabled"
default: "enabled"
- variable: persistence.defaultClass
default: "true"
description: "Set as default StorageClass for Longhorn"
@@ -500,7 +639,7 @@ WARNING:
type: boolean
- variable: persistence.reclaimPolicy
label: Storage Class Retain Policy
description: "Define reclaim policy (Retain or Delete)"
description: "Define reclaim policy. Options: `Retain`, `Delete`"
group: "Longhorn Storage Class Settings"
required: true
type: enum
@@ -516,6 +655,15 @@ WARNING:
min: 1
max: 10
default: 3
- variable: persistence.defaultDataLocality
description: "Set data locality for Longhorn StorageClass. Options: `disabled`, `best-effort`"
label: Default Storage Class Data Locality
group: "Longhorn Storage Class Settings"
type: enum
options:
- "disabled"
- "best-effort"
default: "disabled"
- variable: persistence.recurringJobSelector.enable
description: "Enable recurring job selector for Longhorn StorageClass"
group: "Longhorn Storage Class Settings"
@@ -530,6 +678,20 @@ WARNING:
group: "Longhorn Storage Class Settings"
type: string
default:
- variable: persistence.defaultNodeSelector.enable
description: "Enable node selector for Longhorn StorageClass"
group: "Longhorn Storage Class Settings"
label: Enable Storage Class Node Selector
type: boolean
default: false
show_subquestion_if: true
subquestions:
- variable: persistence.defaultNodeSelector.selector
label: Storage Class Node Selector
description: 'This selector enables only certain nodes having these tags to be used for the volume. e.g. `"storage,fast"`'
group: "Longhorn Storage Class Settings"
type: string
default:
- variable: persistence.backingImage.enable
description: "Set backing image for Longhorn StorageClass"
group: "Longhorn Storage Class Settings"
@@ -579,6 +741,16 @@ WARNING:
group: "Longhorn Storage Class Settings"
type: string
default:
- variable: persistence.removeSnapshotsDuringFilesystemTrim
description: "Allow automatically removing snapshots during filesystem trim for Longhorn StorageClass. Options: `ignored`, `enabled`, `disabled`"
label: Default Storage Class Remove Snapshots During Filesystem Trim
group: "Longhorn Storage Class Settings"
type: enum
options:
- "ignored"
- "enabled"
- "disabled"
default: "ignored"
- variable: ingress.enabled
default: "false"
description: "Expose the app using a Layer 7 load balancer - ingress"
@@ -593,9 +765,15 @@ WARNING:
type: hostname
required: true
label: Layer 7 Load Balancer Hostname
- variable: ingress.path
default: "/"
description: "If ingress is enabled, you can set the default ingress path"
type: string
required: true
label: Ingress Path
- variable: service.ui.type
default: "Rancher-Proxy"
description: "Define Longhorn UI service type"
description: "Define Longhorn UI service type. Options: `ClusterIP`, `NodePort`, `LoadBalancer`, `Rancher-Proxy`"
type: enum
options:
- "ClusterIP"
@@ -616,8 +794,32 @@ WARNING:
show_if: "service.ui.type=NodePort||service.ui.type=LoadBalancer"
label: UI Service NodePort number
- variable: enablePSP
default: "true"
default: "false"
description: "Set up a pod security policy for Longhorn workloads."
label: Pod Security Policy
type: boolean
group: "Other Settings"
- variable: global.cattle.windowsCluster.enabled
default: "false"
description: "Enable this to allow Longhorn to run on a Rancher-deployed Windows cluster."
label: Rancher Windows Cluster
type: boolean
group: "Other Settings"
- variable: networkPolicies.enabled
description: "Enable NetworkPolicies to limit access to the Longhorn pods.
Warning: The Rancher Proxy will not work if this feature is enabled, so a custom NetworkPolicy must be added."
group: "Other Settings"
label: Network Policies
default: "false"
type: boolean
subquestions:
- variable: networkPolicies.type
label: Network Policies for Ingress
description: "Create the policy based on your distribution to allow access for the ingress. Options: `k3s`, `rke2`, `rke1`"
show_if: "networkPolicies.enabled=true&&ingress.enabled=true"
type: enum
default: "rke2"
options:
- "rke1"
- "rke2"
- "k3s"
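The network-policy questions above map to these chart values; a minimal values.yaml sketch, assuming an RKE2 cluster with the chart's ingress enabled:

networkPolicies:
  enabled: true
  type: "rke2"

Note that `type` only affects the extra ingress-facing policy rendered later in this diff; the other NetworkPolicies are created whenever `networkPolicies.enabled` is true.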
@@ -11,7 +11,7 @@ rules:
verbs:
- "*"
- apiGroups: [""]
resources: ["pods", "events", "persistentvolumes", "persistentvolumeclaims","persistentvolumeclaims/status", "nodes", "proxy/nodes", "pods/log", "secrets", "services", "endpoints", "configmaps"]
resources: ["pods", "events", "persistentvolumes", "persistentvolumeclaims","persistentvolumeclaims/status", "nodes", "proxy/nodes", "pods/log", "secrets", "services", "endpoints", "configmaps", "serviceaccounts"]
verbs: ["*"]
- apiGroups: [""]
resources: ["namespaces"]
@@ -23,7 +23,7 @@ rules:
resources: ["jobs", "cronjobs"]
verbs: ["*"]
- apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
resources: ["poddisruptionbudgets", "podsecuritypolicies"]
verbs: ["*"]
- apiGroups: ["scheduling.k8s.io"]
resources: ["priorityclasses"]
@@ -37,10 +37,15 @@
- apiGroups: ["longhorn.io"]
resources: ["volumes", "volumes/status", "engines", "engines/status", "replicas", "replicas/status", "settings",
"engineimages", "engineimages/status", "nodes", "nodes/status", "instancemanagers", "instancemanagers/status",
{{- if .Values.openshift.enabled }}
"engineimages/finalizers", "nodes/finalizers", "instancemanagers/finalizers",
{{- end }}
"sharemanagers", "sharemanagers/status", "backingimages", "backingimages/status",
"backingimagemanagers", "backingimagemanagers/status", "backingimagedatasources", "backingimagedatasources/status",
"backuptargets", "backuptargets/status", "backupvolumes", "backupvolumes/status", "backups", "backups/status",
"recurringjobs", "recurringjobs/status"]
"recurringjobs", "recurringjobs/status", "orphans", "orphans/status", "snapshots", "snapshots/status",
"supportbundles", "supportbundles/status", "systembackups", "systembackups/status", "systemrestores", "systemrestores/status",
"volumeattachments", "volumeattachments/status"]
verbs: ["*"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
@@ -48,3 +53,25 @@
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list"]
- apiGroups: ["apiregistration.k8s.io"]
resources: ["apiservices"]
verbs: ["list", "watch"]
- apiGroups: ["admissionregistration.k8s.io"]
resources: ["mutatingwebhookconfigurations", "validatingwebhookconfigurations"]
verbs: ["get", "list", "create", "patch", "delete"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles", "rolebindings", "clusterrolebindings", "clusterroles"]
verbs: ["*"]
{{- if .Values.openshift.enabled }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: longhorn-ocp-privileged-role
labels: {{- include "longhorn.labels" . | nindent 4 }}
rules:
- apiGroups: ["security.openshift.io"]
resources: ["securitycontextconstraints"]
resourceNames: ["anyuid", "privileged"]
verbs: ["use"]
{{- end }}
@@ -11,3 +11,39 @@ subjects:
- kind: ServiceAccount
name: longhorn-service-account
namespace: {{ include "release_namespace" . }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: longhorn-support-bundle
labels: {{- include "longhorn.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: longhorn-support-bundle
namespace: {{ include "release_namespace" . }}
{{- if .Values.openshift.enabled }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: longhorn-ocp-privileged-bind
labels: {{- include "longhorn.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: longhorn-ocp-privileged-role
subjects:
- kind: ServiceAccount
name: longhorn-service-account
namespace: {{ include "release_namespace" . }}
- kind: ServiceAccount
name: longhorn-ui-service-account
namespace: {{ include "release_namespace" . }}
- kind: ServiceAccount
name: default # supportbundle-agent-support-bundle uses default sa
namespace: {{ include "release_namespace" . }}
{{- end }}
File diff suppressed because it is too large
@@ -21,12 +21,15 @@ spec:
containers:
- name: longhorn-manager
image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
imagePullPolicy: IfNotPresent
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
privileged: true
command:
- longhorn-manager
- -d
{{- if eq .Values.longhornManager.log.format "json" }}
- -j
{{- end }}
- daemon
- --engine-image
- "{{ template "registry_url" . }}{{ .Values.image.longhorn.engine.repository }}:{{ .Values.image.longhorn.engine.tag }}"
@@ -36,6 +39,8 @@ spec:
- "{{ template "registry_url" . }}{{ .Values.image.longhorn.shareManager.repository }}:{{ .Values.image.longhorn.shareManager.tag }}"
- --backing-image-manager-image
- "{{ template "registry_url" . }}{{ .Values.image.longhorn.backingImageManager.repository }}:{{ .Values.image.longhorn.backingImageManager.tag }}"
- --support-bundle-manager-image
- "{{ template "registry_url" . }}{{ .Values.image.longhorn.supportBundleKit.repository }}:{{ .Values.image.longhorn.supportBundleKit.tag }}"
- --manager-image
- "{{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}"
- --service-account
@@ -43,9 +48,17 @@ spec:
ports:
- containerPort: 9500
name: manager
- containerPort: 9501
name: conversion-wh
- containerPort: 9502
name: admission-wh
- containerPort: 9503
name: recov-backend
readinessProbe:
tcpSocket:
port: 9500
httpGet:
path: /v1/healthz
port: 9501
scheme: HTTPS
volumeMounts:
- name: dev
mountPath: /host/dev/
@@ -54,8 +67,8 @@ spec:
- name: longhorn
mountPath: /var/lib/longhorn/
mountPropagation: Bidirectional
- name: longhorn-default-setting
mountPath: /var/lib/longhorn-setting/
- name: longhorn-grpc-tls
mountPath: /tls-files/
env:
- name: POD_NAMESPACE
valueFrom:
@@ -69,8 +82,6 @@ spec:
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: DEFAULT_SETTING_PATH
value: /var/lib/longhorn-setting/default-setting.yaml
volumes:
- name: dev
hostPath:
@@ -81,24 +92,35 @@ spec:
- name: longhorn
hostPath:
path: /var/lib/longhorn/
- name: longhorn-default-setting
configMap:
name: longhorn-default-setting
- name: longhorn-grpc-tls
secret:
secretName: longhorn-grpc-tls
optional: true
{{- if .Values.privateRegistry.registrySecret }}
imagePullSecrets:
- name: {{ .Values.privateRegistry.registrySecret }}
{{- end }}
{{- if .Values.longhornManager.priorityClass }}
priorityClassName: {{ .Values.longhornManager.priorityClass | quote}}
priorityClassName: {{ .Values.longhornManager.priorityClass | quote }}
{{- end }}
{{- if or .Values.longhornManager.tolerations .Values.global.cattle.windowsCluster.enabled }}
tolerations:
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
{{- end }}
{{- if .Values.longhornManager.tolerations }}
tolerations:
{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
{{- end }}
{{- if .Values.longhornManager.nodeSelector }}
{{- end }}
{{- if or .Values.longhornManager.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
nodeSelector:
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.longhornManager.nodeSelector }}
{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
{{- end }}
{{- end }}
serviceAccountName: longhorn-service-account
updateStrategy:
rollingUpdate:
@@ -111,6 +133,10 @@ metadata:
app: longhorn-manager
name: longhorn-backend
namespace: {{ include "release_namespace" . }}
{{- if .Values.longhornManager.serviceAnnotations }}
annotations:
{{ toYaml .Values.longhornManager.serviceAnnotations | indent 4 }}
{{- end }}
spec:
type: {{ .Values.service.manager.type }}
sessionAffinity: ClientIP
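The hard-coded pull policies replaced in this template now come from a single chart value; a minimal values.yaml sketch (the key is taken from the template above, and the value shown is just one of the standard Kubernetes policies):

image:
  pullPolicy: IfNotPresent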
@@ -6,39 +6,81 @@ metadata:
labels: {{- include "longhorn.labels" . | nindent 4 }}
data:
default-setting.yaml: |-
backup-target: {{ .Values.defaultSettings.backupTarget }}
backup-target-credential-secret: {{ .Values.defaultSettings.backupTargetCredentialSecret }}
allow-recurring-job-while-volume-detached: {{ .Values.defaultSettings.allowRecurringJobWhileVolumeDetached }}
create-default-disk-labeled-nodes: {{ .Values.defaultSettings.createDefaultDiskLabeledNodes }}
default-data-path: {{ .Values.defaultSettings.defaultDataPath }}
replica-soft-anti-affinity: {{ .Values.defaultSettings.replicaSoftAntiAffinity }}
replica-auto-balance: {{ .Values.defaultSettings.replicaAutoBalance }}
storage-over-provisioning-percentage: {{ .Values.defaultSettings.storageOverProvisioningPercentage }}
storage-minimal-available-percentage: {{ .Values.defaultSettings.storageMinimalAvailablePercentage }}
upgrade-checker: {{ .Values.defaultSettings.upgradeChecker }}
default-replica-count: {{ .Values.defaultSettings.defaultReplicaCount }}
default-data-locality: {{ .Values.defaultSettings.defaultDataLocality }}
default-longhorn-static-storage-class: {{ .Values.defaultSettings.defaultLonghornStaticStorageClass }}
backupstore-poll-interval: {{ .Values.defaultSettings.backupstorePollInterval }}
taint-toleration: {{ .Values.defaultSettings.taintToleration }}
system-managed-components-node-selector: {{ .Values.defaultSettings.systemManagedComponentsNodeSelector }}
priority-class: {{ .Values.defaultSettings.priorityClass }}
auto-salvage: {{ .Values.defaultSettings.autoSalvage }}
auto-delete-pod-when-volume-detached-unexpectedly: {{ .Values.defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly }}
disable-scheduling-on-cordoned-node: {{ .Values.defaultSettings.disableSchedulingOnCordonedNode }}
replica-zone-soft-anti-affinity: {{ .Values.defaultSettings.replicaZoneSoftAntiAffinity }}
node-down-pod-deletion-policy: {{ .Values.defaultSettings.nodeDownPodDeletionPolicy }}
allow-node-drain-with-last-healthy-replica: {{ .Values.defaultSettings.allowNodeDrainWithLastHealthyReplica }}
mkfs-ext4-parameters: {{ .Values.defaultSettings.mkfsExt4Parameters }}
disable-replica-rebuild: {{ .Values.defaultSettings.disableReplicaRebuild }}
replica-replenishment-wait-interval: {{ .Values.defaultSettings.replicaReplenishmentWaitInterval }}
concurrent-replica-rebuild-per-node-limit: {{ .Values.defaultSettings.concurrentReplicaRebuildPerNodeLimit }}
disable-revision-counter: {{ .Values.defaultSettings.disableRevisionCounter }}
system-managed-pods-image-pull-policy: {{ .Values.defaultSettings.systemManagedPodsImagePullPolicy }}
allow-volume-creation-with-degraded-availability: {{ .Values.defaultSettings.allowVolumeCreationWithDegradedAvailability }}
auto-cleanup-system-generated-snapshot: {{ .Values.defaultSettings.autoCleanupSystemGeneratedSnapshot }}
concurrent-automatic-engine-upgrade-per-node-limit: {{ .Values.defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit }}
backing-image-cleanup-wait-interval: {{ .Values.defaultSettings.backingImageCleanupWaitInterval }}
backing-image-recovery-wait-interval: {{ .Values.defaultSettings.backingImageRecoveryWaitInterval }}
guaranteed-engine-manager-cpu: {{ .Values.defaultSettings.guaranteedEngineManagerCPU }}
guaranteed-replica-manager-cpu: {{ .Values.defaultSettings.guaranteedReplicaManagerCPU }}
{{ if not (kindIs "invalid" .Values.defaultSettings.backupTarget) }}backup-target: {{ .Values.defaultSettings.backupTarget }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.backupTargetCredentialSecret) }}backup-target-credential-secret: {{ .Values.defaultSettings.backupTargetCredentialSecret }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.allowRecurringJobWhileVolumeDetached) }}allow-recurring-job-while-volume-detached: {{ .Values.defaultSettings.allowRecurringJobWhileVolumeDetached }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.createDefaultDiskLabeledNodes) }}create-default-disk-labeled-nodes: {{ .Values.defaultSettings.createDefaultDiskLabeledNodes }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.defaultDataPath) }}default-data-path: {{ .Values.defaultSettings.defaultDataPath }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.replicaSoftAntiAffinity) }}replica-soft-anti-affinity: {{ .Values.defaultSettings.replicaSoftAntiAffinity }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.replicaAutoBalance) }}replica-auto-balance: {{ .Values.defaultSettings.replicaAutoBalance }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.storageOverProvisioningPercentage) }}storage-over-provisioning-percentage: {{ .Values.defaultSettings.storageOverProvisioningPercentage }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.storageMinimalAvailablePercentage) }}storage-minimal-available-percentage: {{ .Values.defaultSettings.storageMinimalAvailablePercentage }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.storageReservedPercentageForDefaultDisk) }}storage-reserved-percentage-for-default-disk: {{ .Values.defaultSettings.storageReservedPercentageForDefaultDisk }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.upgradeChecker) }}upgrade-checker: {{ .Values.defaultSettings.upgradeChecker }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.defaultReplicaCount) }}default-replica-count: {{ .Values.defaultSettings.defaultReplicaCount }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.defaultDataLocality) }}default-data-locality: {{ .Values.defaultSettings.defaultDataLocality }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.defaultLonghornStaticStorageClass) }}default-longhorn-static-storage-class: {{ .Values.defaultSettings.defaultLonghornStaticStorageClass }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.backupstorePollInterval) }}backupstore-poll-interval: {{ .Values.defaultSettings.backupstorePollInterval }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.failedBackupTTL) }}failed-backup-ttl: {{ .Values.defaultSettings.failedBackupTTL }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.restoreVolumeRecurringJobs) }}restore-volume-recurring-jobs: {{ .Values.defaultSettings.restoreVolumeRecurringJobs }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.recurringSuccessfulJobsHistoryLimit) }}recurring-successful-jobs-history-limit: {{ .Values.defaultSettings.recurringSuccessfulJobsHistoryLimit }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.recurringFailedJobsHistoryLimit) }}recurring-failed-jobs-history-limit: {{ .Values.defaultSettings.recurringFailedJobsHistoryLimit }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.supportBundleFailedHistoryLimit) }}support-bundle-failed-history-limit: {{ .Values.defaultSettings.supportBundleFailedHistoryLimit }}{{ end }}
{{- if or (not (kindIs "invalid" .Values.defaultSettings.taintToleration)) (.Values.global.cattle.windowsCluster.enabled) }}
taint-toleration: {{ $windowsDefaultSettingTaintToleration := list }}{{ $defaultSettingTaintToleration := list -}}
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.defaultSetting.taintToleration -}}
{{- $windowsDefaultSettingTaintToleration = .Values.global.cattle.windowsCluster.defaultSetting.taintToleration -}}
{{- end -}}
{{- if not (kindIs "invalid" .Values.defaultSettings.taintToleration) -}}
{{- $defaultSettingTaintToleration = .Values.defaultSettings.taintToleration -}}
{{- end -}}
{{- $taintToleration := list $windowsDefaultSettingTaintToleration $defaultSettingTaintToleration }}{{ join ";" (compact $taintToleration) -}}
{{- end }}
{{- if or (not (kindIs "invalid" .Values.defaultSettings.systemManagedComponentsNodeSelector)) (.Values.global.cattle.windowsCluster.enabled) }}
system-managed-components-node-selector: {{ $windowsDefaultSettingNodeSelector := list }}{{ $defaultSettingNodeSelector := list -}}
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.defaultSetting.systemManagedComponentsNodeSelector -}}
{{ $windowsDefaultSettingNodeSelector = .Values.global.cattle.windowsCluster.defaultSetting.systemManagedComponentsNodeSelector -}}
{{- end -}}
{{- if not (kindIs "invalid" .Values.defaultSettings.systemManagedComponentsNodeSelector) -}}
{{- $defaultSettingNodeSelector = .Values.defaultSettings.systemManagedComponentsNodeSelector -}}
{{- end -}}
{{- $nodeSelector := list $windowsDefaultSettingNodeSelector $defaultSettingNodeSelector }}{{ join ";" (compact $nodeSelector) -}}
{{- end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.priorityClass) }}priority-class: {{ .Values.defaultSettings.priorityClass }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.autoSalvage) }}auto-salvage: {{ .Values.defaultSettings.autoSalvage }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly) }}auto-delete-pod-when-volume-detached-unexpectedly: {{ .Values.defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.disableSchedulingOnCordonedNode) }}disable-scheduling-on-cordoned-node: {{ .Values.defaultSettings.disableSchedulingOnCordonedNode }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.replicaZoneSoftAntiAffinity) }}replica-zone-soft-anti-affinity: {{ .Values.defaultSettings.replicaZoneSoftAntiAffinity }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.replicaDiskSoftAntiAffinity) }}replica-disk-soft-anti-affinity: {{ .Values.defaultSettings.replicaDiskSoftAntiAffinity }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.nodeDownPodDeletionPolicy) }}node-down-pod-deletion-policy: {{ .Values.defaultSettings.nodeDownPodDeletionPolicy }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.nodeDrainPolicy) }}node-drain-policy: {{ .Values.defaultSettings.nodeDrainPolicy }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.replicaReplenishmentWaitInterval) }}replica-replenishment-wait-interval: {{ .Values.defaultSettings.replicaReplenishmentWaitInterval }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.concurrentReplicaRebuildPerNodeLimit) }}concurrent-replica-rebuild-per-node-limit: {{ .Values.defaultSettings.concurrentReplicaRebuildPerNodeLimit }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.concurrentVolumeBackupRestorePerNodeLimit) }}concurrent-volume-backup-restore-per-node-limit: {{ .Values.defaultSettings.concurrentVolumeBackupRestorePerNodeLimit }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.disableRevisionCounter) }}disable-revision-counter: {{ .Values.defaultSettings.disableRevisionCounter }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.systemManagedPodsImagePullPolicy) }}system-managed-pods-image-pull-policy: {{ .Values.defaultSettings.systemManagedPodsImagePullPolicy }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.allowVolumeCreationWithDegradedAvailability) }}allow-volume-creation-with-degraded-availability: {{ .Values.defaultSettings.allowVolumeCreationWithDegradedAvailability }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.autoCleanupSystemGeneratedSnapshot) }}auto-cleanup-system-generated-snapshot: {{ .Values.defaultSettings.autoCleanupSystemGeneratedSnapshot }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit) }}concurrent-automatic-engine-upgrade-per-node-limit: {{ .Values.defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.backingImageCleanupWaitInterval) }}backing-image-cleanup-wait-interval: {{ .Values.defaultSettings.backingImageCleanupWaitInterval }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.backingImageRecoveryWaitInterval) }}backing-image-recovery-wait-interval: {{ .Values.defaultSettings.backingImageRecoveryWaitInterval }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.guaranteedInstanceManagerCPU) }}guaranteed-instance-manager-cpu: {{ .Values.defaultSettings.guaranteedInstanceManagerCPU }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.kubernetesClusterAutoscalerEnabled) }}kubernetes-cluster-autoscaler-enabled: {{ .Values.defaultSettings.kubernetesClusterAutoscalerEnabled }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.orphanAutoDeletion) }}orphan-auto-deletion: {{ .Values.defaultSettings.orphanAutoDeletion }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.storageNetwork) }}storage-network: {{ .Values.defaultSettings.storageNetwork }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.deletingConfirmationFlag) }}deleting-confirmation-flag: {{ .Values.defaultSettings.deletingConfirmationFlag }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.engineReplicaTimeout) }}engine-replica-timeout: {{ .Values.defaultSettings.engineReplicaTimeout }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.snapshotDataIntegrity) }}snapshot-data-integrity: {{ .Values.defaultSettings.snapshotDataIntegrity }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.snapshotDataIntegrityImmediateCheckAfterSnapshotCreation) }}snapshot-data-integrity-immediate-check-after-snapshot-creation: {{ .Values.defaultSettings.snapshotDataIntegrityImmediateCheckAfterSnapshotCreation }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.snapshotDataIntegrityCronjob) }}snapshot-data-integrity-cronjob: {{ .Values.defaultSettings.snapshotDataIntegrityCronjob }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.removeSnapshotsDuringFilesystemTrim) }}remove-snapshots-during-filesystem-trim: {{ .Values.defaultSettings.removeSnapshotsDuringFilesystemTrim }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.fastReplicaRebuildEnabled) }}fast-replica-rebuild-enabled: {{ .Values.defaultSettings.fastReplicaRebuildEnabled }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.replicaFileSyncHttpClientTimeout) }}replica-file-sync-http-client-timeout: {{ .Values.defaultSettings.replicaFileSyncHttpClientTimeout }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.logLevel) }}log-level: {{ .Values.defaultSettings.logLevel }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.backupCompressionMethod) }}backup-compression-method: {{ .Values.defaultSettings.backupCompressionMethod }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.backupConcurrentLimit) }}backup-concurrent-limit: {{ .Values.defaultSettings.backupConcurrentLimit }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.restoreConcurrentLimit) }}restore-concurrent-limit: {{ .Values.defaultSettings.restoreConcurrentLimit }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.v2DataEngine) }}v2-data-engine: {{ .Values.defaultSettings.v2DataEngine }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.offlineReplicaRebuilding) }}offline-replica-rebuilding: {{ .Values.defaultSettings.offlineReplicaRebuilding }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.allowEmptyNodeSelectorVolume) }}allow-empty-node-selector-volume: {{ .Values.defaultSettings.allowEmptyNodeSelectorVolume }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.allowEmptyDiskSelectorVolume) }}allow-empty-disk-selector-volume: {{ .Values.defaultSettings.allowEmptyDiskSelectorVolume }}{{ end }}
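Every setting above is now wrapped in a Sprig `kindIs "invalid"` guard, so a line is rendered only when the corresponding value is actually set. A minimal sketch of the pattern with a hypothetical setting (`example-setting`/`exampleSetting` are not part of the chart):

{{ if not (kindIs "invalid" .Values.defaultSettings.exampleSetting) }}example-setting: {{ .Values.defaultSettings.exampleSetting }}{{ end }}

When `defaultSettings.exampleSetting` is left unset, its value is nil (kind "invalid"), the whole line is omitted, and longhorn-manager keeps its built-in default instead of receiving an empty string.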
@@ -21,7 +21,7 @@ spec:
containers:
- name: longhorn-driver-deployer
image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
imagePullPolicy: IfNotPresent
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- longhorn-manager
- -d
@@ -67,6 +67,10 @@ spec:
- name: CSI_SNAPSHOTTER_IMAGE
value: "{{ template "registry_url" . }}{{ .Values.image.csi.snapshotter.repository }}:{{ .Values.image.csi.snapshotter.tag }}"
{{- end }}
{{- if and .Values.image.csi.livenessProbe.repository .Values.image.csi.livenessProbe.tag }}
- name: CSI_LIVENESS_PROBE_IMAGE
value: "{{ template "registry_url" . }}{{ .Values.image.csi.livenessProbe.repository }}:{{ .Values.image.csi.livenessProbe.tag }}"
{{- end }}
{{- if .Values.csi.attacherReplicaCount }}
- name: CSI_ATTACHER_REPLICA_COUNT
value: {{ .Values.csi.attacherReplicaCount | quote }}
@@ -89,16 +93,26 @@ spec:
- name: {{ .Values.privateRegistry.registrySecret }}
{{- end }}
{{- if .Values.longhornDriver.priorityClass }}
priorityClassName: {{ .Values.longhornDriver.priorityClass | quote}}
priorityClassName: {{ .Values.longhornDriver.priorityClass | quote }}
{{- end }}
{{- if or .Values.longhornDriver.tolerations .Values.global.cattle.windowsCluster.enabled }}
tolerations:
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
{{- end }}
{{- if .Values.longhornDriver.tolerations }}
tolerations:
{{ toYaml .Values.longhornDriver.tolerations | indent 6 }}
{{- end }}
{{- if .Values.longhornDriver.nodeSelector }}
{{- end }}
{{- if or .Values.longhornDriver.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
nodeSelector:
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.longhornDriver.nodeSelector }}
{{ toYaml .Values.longhornDriver.nodeSelector | indent 8 }}
{{- end }}
{{- end }}
serviceAccountName: longhorn-service-account
securityContext:
runAsUser: 0
@@ -1,3 +1,41 @@
{{- if .Values.openshift.enabled }}
{{- if .Values.openshift.ui.route }}
# https://github.com/openshift/oauth-proxy/blob/master/contrib/sidecar.yaml
# Create a proxy service account and ensure it will use the route "proxy"
# Create a secure connection to the proxy via a route
apiVersion: route.openshift.io/v1
kind: Route
metadata:
labels: {{- include "longhorn.labels" . | nindent 4 }}
app: longhorn-ui
name: {{ .Values.openshift.ui.route }}
namespace: {{ include "release_namespace" . }}
spec:
to:
kind: Service
name: longhorn-ui
tls:
termination: reencrypt
---
apiVersion: v1
kind: Service
metadata:
labels: {{- include "longhorn.labels" . | nindent 4 }}
app: longhorn-ui
name: longhorn-ui
namespace: {{ include "release_namespace" . }}
annotations:
service.alpha.openshift.io/serving-cert-secret-name: longhorn-ui-tls
spec:
ports:
- name: longhorn-ui
port: {{ .Values.openshift.ui.port | default 443 }}
targetPort: {{ .Values.openshift.ui.proxy | default 8443 }}
selector:
app: longhorn-ui
---
{{- end }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -6,7 +44,7 @@ metadata:
name: longhorn-ui
namespace: {{ include "release_namespace" . }}
spec:
replicas: 1
replicas: {{ .Values.longhornUI.replicas }}
selector:
matchLabels:
app: longhorn-ui
@@ -15,33 +53,99 @@ spec:
labels: {{- include "longhorn.labels" . | nindent 8 }}
app: longhorn-ui
spec:
serviceAccountName: longhorn-ui-service-account
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- longhorn-ui
topologyKey: kubernetes.io/hostname
containers:
{{- if .Values.openshift.enabled }}
{{- if .Values.openshift.ui.route }}
- name: oauth-proxy
image: {{ template "registry_url" . }}{{ .Values.image.openshift.oauthProxy.repository }}:{{ .Values.image.openshift.oauthProxy.tag }}
imagePullPolicy: IfNotPresent
ports:
- containerPort: {{ .Values.openshift.ui.proxy | default 8443 }}
name: public
args:
- --https-address=:{{ .Values.openshift.ui.proxy | default 8443 }}
- --provider=openshift
- --openshift-service-account=longhorn-ui-service-account
- --upstream=http://localhost:8000
- --tls-cert=/etc/tls/private/tls.crt
- --tls-key=/etc/tls/private/tls.key
- --cookie-secret=SECRET
- --openshift-sar={"namespace":"{{ include "release_namespace" . }}","group":"longhorn.io","resource":"setting","verb":"delete"}
volumeMounts:
- mountPath: /etc/tls/private
name: longhorn-ui-tls
{{- end }}
{{- end }}
- name: longhorn-ui
image: {{ template "registry_url" . }}{{ .Values.image.longhorn.ui.repository }}:{{ .Values.image.longhorn.ui.tag }}
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0
imagePullPolicy: {{ .Values.image.pullPolicy }}
volumeMounts:
- name: nginx-cache
mountPath: /var/cache/nginx/
- name: nginx-config
mountPath: /var/config/nginx/
- name: var-run
mountPath: /var/run/
ports:
- containerPort: 8000
name: http
env:
- name: LONGHORN_MANAGER_IP
value: "http://longhorn-backend:9500"
- name: LONGHORN_UI_PORT
value: "8000"
volumes:
{{- if .Values.openshift.enabled }}
{{- if .Values.openshift.ui.route }}
- name: longhorn-ui-tls
secret:
secretName: longhorn-ui-tls
{{- end }}
{{- end }}
- emptyDir: {}
name: nginx-cache
- emptyDir: {}
name: nginx-config
- emptyDir: {}
name: var-run
{{- if .Values.privateRegistry.registrySecret }}
imagePullSecrets:
- name: {{ .Values.privateRegistry.registrySecret }}
{{- end }}
{{- if .Values.longhornUI.priorityClass }}
priorityClassName: {{ .Values.longhornUI.priorityClass | quote}}
priorityClassName: {{ .Values.longhornUI.priorityClass | quote }}
{{- end }}
{{- if or .Values.longhornUI.tolerations .Values.global.cattle.windowsCluster.enabled }}
tolerations:
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
{{- end }}
{{- if .Values.longhornUI.tolerations }}
tolerations:
{{ toYaml .Values.longhornUI.tolerations | indent 6 }}
{{- end }}
{{- if .Values.longhornUI.nodeSelector }}
{{- end }}
{{- if or .Values.longhornUI.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
nodeSelector:
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.longhornUI.nodeSelector }}
{{ toYaml .Values.longhornUI.nodeSelector | indent 8 }}
{{- end }}
{{- end }}
---
kind: Service
apiVersion: v1
@@ -59,6 +163,12 @@ spec:
{{- else }}
type: {{ .Values.service.ui.type }}
{{- end }}
{{- if and .Values.service.ui.loadBalancerIP (eq .Values.service.ui.type "LoadBalancer") }}
loadBalancerIP: {{ .Values.service.ui.loadBalancerIP }}
{{- end }}
{{- if and (eq .Values.service.ui.type "LoadBalancer") .Values.service.ui.loadBalancerSourceRanges }}
loadBalancerSourceRanges: {{- toYaml .Values.service.ui.loadBalancerSourceRanges | nindent 4 }}
{{- end }}
selector:
app: longhorn-ui
ports:
@@ -11,7 +11,7 @@ metadata:
labels: {{- include "longhorn.labels" . | nindent 4 }}
app: longhorn-ingress
annotations:
{{- if .Values.ingress.tls }}
{{- if .Values.ingress.secureBackends }}
ingress.kubernetes.io/secure-backends: "true"
{{- end }}
{{- range $key, $value := .Values.ingress.annotations }}
@@ -0,0 +1,27 @@
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: backing-image-data-source
namespace: longhorn-system
spec:
podSelector:
matchLabels:
longhorn.io/component: backing-image-data-source
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: longhorn-manager
- podSelector:
matchLabels:
longhorn.io/component: instance-manager
- podSelector:
matchLabels:
longhorn.io/component: backing-image-manager
- podSelector:
matchLabels:
longhorn.io/component: backing-image-data-source
{{- end }}
@@ -0,0 +1,27 @@
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: backing-image-manager
namespace: longhorn-system
spec:
podSelector:
matchLabels:
longhorn.io/component: backing-image-manager
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: longhorn-manager
- podSelector:
matchLabels:
longhorn.io/component: instance-manager
- podSelector:
matchLabels:
longhorn.io/component: backing-image-manager
- podSelector:
matchLabels:
longhorn.io/component: backing-image-data-source
{{- end }}
@@ -0,0 +1,27 @@
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: instance-manager
namespace: longhorn-system
spec:
podSelector:
matchLabels:
longhorn.io/component: instance-manager
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: longhorn-manager
- podSelector:
matchLabels:
longhorn.io/component: instance-manager
- podSelector:
matchLabels:
longhorn.io/component: backing-image-manager
- podSelector:
matchLabels:
longhorn.io/component: backing-image-data-source
{{- end }}
35 chart/templates/network-policies/manager-network-policy.yaml Normal file
@@ -0,0 +1,35 @@
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: longhorn-manager
namespace: longhorn-system
spec:
podSelector:
matchLabels:
app: longhorn-manager
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: longhorn-manager
- podSelector:
matchLabels:
app: longhorn-ui
- podSelector:
matchLabels:
app: longhorn-csi-plugin
- podSelector:
matchLabels:
longhorn.io/managed-by: longhorn-manager
matchExpressions:
- { key: recurring-job.longhorn.io, operator: Exists }
- podSelector:
matchExpressions:
- { key: longhorn.io/job-task, operator: Exists }
- podSelector:
matchLabels:
app: longhorn-driver-deployer
{{- end }}
@@ -0,0 +1,17 @@
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: longhorn-recovery-backend
namespace: longhorn-system
spec:
podSelector:
matchLabels:
app: longhorn-manager
policyTypes:
- Ingress
ingress:
- ports:
- protocol: TCP
port: 9503
{{- end }}
@@ -0,0 +1,46 @@
{{- if and .Values.networkPolicies.enabled .Values.ingress.enabled (not (eq .Values.networkPolicies.type "")) }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: longhorn-ui-frontend
namespace: longhorn-system
spec:
podSelector:
matchLabels:
app: longhorn-ui
policyTypes:
- Ingress
ingress:
- from:
{{- if eq .Values.networkPolicies.type "rke1"}}
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: ingress-nginx
podSelector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
{{- else if eq .Values.networkPolicies.type "rke2" }}
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: kube-system
podSelector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: rke2-ingress-nginx
app.kubernetes.io/name: rke2-ingress-nginx
{{- else if eq .Values.networkPolicies.type "k3s" }}
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: kube-system
podSelector:
matchLabels:
app.kubernetes.io/name: traefik
ports:
- port: 8000
protocol: TCP
- port: 80
protocol: TCP
{{- end }}
{{- end }}
33 chart/templates/network-policies/webhook-network-policy.yaml Normal file
@@ -0,0 +1,33 @@
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: longhorn-conversion-webhook
namespace: longhorn-system
spec:
podSelector:
matchLabels:
app: longhorn-manager
policyTypes:
- Ingress
ingress:
- ports:
- protocol: TCP
port: 9501
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: longhorn-admission-webhook
namespace: longhorn-system
spec:
podSelector:
matchLabels:
app: longhorn-manager
policyTypes:
- Ingress
ingress:
- ports:
- protocol: TCP
port: 9502
{{- end }}
@@ -18,9 +18,7 @@ spec:
containers:
- name: longhorn-post-upgrade
image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- longhorn-manager
- post-upgrade
@@ -35,14 +33,24 @@ spec:
- name: {{ .Values.privateRegistry.registrySecret }}
{{- end }}
{{- if .Values.longhornManager.priorityClass }}
priorityClassName: {{ .Values.longhornManager.priorityClass | quote}}
priorityClassName: {{ .Values.longhornManager.priorityClass | quote }}
{{- end }}
serviceAccountName: longhorn-service-account
{{- if .Values.longhornManager.tolerations }}
{{- if or .Values.longhornManager.tolerations .Values.global.cattle.windowsCluster.enabled }}
tolerations:
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
{{- end }}
{{- if .Values.longhornManager.tolerations }}
{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
{{- end }}
{{- if .Values.longhornManager.nodeSelector }}
{{- end }}
{{- if or .Values.longhornManager.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
nodeSelector:
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.longhornManager.nodeSelector }}
{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
{{- end }}
{{- end }}
58 chart/templates/preupgrade-job.yaml Normal file
@ -0,0 +1,58 @@
|
||||
{{- if .Values.helmPreUpgradeCheckerJob.enabled }}
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation,hook-failed
  name: longhorn-pre-upgrade
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
spec:
  activeDeadlineSeconds: 900
  backoffLimit: 1
  template:
    metadata:
      name: longhorn-pre-upgrade
      labels: {{- include "longhorn.labels" . | nindent 8 }}
    spec:
      containers:
      - name: longhorn-pre-upgrade
        image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command:
        - longhorn-manager
        - pre-upgrade
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      restartPolicy: OnFailure
      {{- if .Values.privateRegistry.registrySecret }}
      imagePullSecrets:
      - name: {{ .Values.privateRegistry.registrySecret }}
      {{- end }}
      {{- if .Values.longhornManager.priorityClass }}
      priorityClassName: {{ .Values.longhornManager.priorityClass | quote }}
      {{- end }}
      serviceAccountName: longhorn-service-account
      {{- if or .Values.longhornManager.tolerations .Values.global.cattle.windowsCluster.enabled }}
      tolerations:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
        {{- end }}
        {{- if .Values.longhornManager.tolerations }}
{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
        {{- end }}
      {{- end }}
      {{- if or .Values.longhornManager.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
      nodeSelector:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
        {{- end }}
        {{- if .Values.longhornManager.nodeSelector }}
{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
        {{- end }}
      {{- end }}
{{- end }}
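Because the whole Job is wrapped in the `helmPreUpgradeCheckerJob.enabled` guard, environments that cannot run Helm hooks (for example GitOps tooling that only templates the chart) can opt out. A minimal values sketch:

    helmPreUpgradeCheckerJob:
      enabled: false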
@@ -1,3 +1,4 @@
 {{- if .Values.privateRegistry.createSecret }}
+{{- if .Values.privateRegistry.registrySecret }}
 apiVersion: v1
 kind: Secret
@@ -9,3 +10,4 @@ type: kubernetes.io/dockerconfigjson
 data:
   .dockerconfigjson: {{ template "secret" . }}
 {{- end }}
+{{- end }}
@@ -4,3 +4,37 @@ metadata:
  name: longhorn-service-account
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: longhorn-ui-service-account
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- if .Values.openshift.enabled }}
  {{- if .Values.openshift.ui.route }}
  {{- if not .Values.serviceAccount.annotations }}
  annotations:
  {{- end }}
    serviceaccounts.openshift.io/oauth-redirectreference.primary: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"longhorn-ui"}}'
  {{- end }}
  {{- end }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: longhorn-support-bundle
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
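All three service accounts read the same `serviceAccount.annotations` value, so a single values entry annotates them all. A sketch with a hypothetical cloud IAM annotation (the key and ARN are placeholders, not part of this diff):

    serviceAccount:
      annotations:
        eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/longhorn"  # hypothetical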
@@ -1,10 +1,60 @@
apiVersion: v1
kind: Service
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-conversion-webhook
  name: longhorn-conversion-webhook
  namespace: {{ include "release_namespace" . }}
spec:
  type: ClusterIP
  sessionAffinity: ClientIP
  selector:
    app: longhorn-manager
  ports:
  - name: conversion-webhook
    port: 9501
    targetPort: conversion-wh
---
apiVersion: v1
kind: Service
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-admission-webhook
  name: longhorn-admission-webhook
  namespace: {{ include "release_namespace" . }}
spec:
  type: ClusterIP
  sessionAffinity: ClientIP
  selector:
    app: longhorn-manager
  ports:
  - name: admission-webhook
    port: 9502
    targetPort: admission-wh
---
apiVersion: v1
kind: Service
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-recovery-backend
  name: longhorn-recovery-backend
  namespace: {{ include "release_namespace" . }}
spec:
  type: ClusterIP
  sessionAffinity: ClientIP
  selector:
    app: longhorn-manager
  ports:
  - name: recovery-backend
    port: 9503
    targetPort: recov-backend
---
apiVersion: v1
kind: Service
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
  name: longhorn-engine-manager
- namespace: longhorn-system
+ namespace: {{ include "release_namespace" . }}
spec:
  clusterIP: None
  selector:
@@ -16,7 +66,7 @@ kind: Service
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
  name: longhorn-replica-manager
- namespace: longhorn-system
+ namespace: {{ include "release_namespace" . }}
spec:
  clusterIP: None
  selector:
@@ -21,7 +21,13 @@ data:
    staleReplicaTimeout: "30"
    fromBackup: ""
    {{- if .Values.persistence.defaultFsType }}
-   fsType: "{{.Values.persistence.defaultFsType}}"
+   fsType: "{{ .Values.persistence.defaultFsType }}"
    {{- end }}
+   {{- if .Values.persistence.defaultMkfsParams }}
+   mkfsParams: "{{ .Values.persistence.defaultMkfsParams }}"
+   {{- end }}
    {{- if .Values.persistence.migratable }}
    migratable: "{{ .Values.persistence.migratable }}"
    {{- end }}
    {{- if .Values.persistence.backingImage.enable }}
    backingImage: {{ .Values.persistence.backingImage.name }}
@@ -32,3 +38,7 @@ data:
    {{- if .Values.persistence.recurringJobSelector.enable }}
    recurringJobSelector: '{{ .Values.persistence.recurringJobSelector.jobList }}'
    {{- end }}
+   dataLocality: {{ .Values.persistence.defaultDataLocality | quote }}
+   {{- if .Values.persistence.defaultNodeSelector.enable }}
+   nodeSelector: "{{ .Values.persistence.defaultNodeSelector.selector }}"
+   {{- end }}
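The new `mkfsParams`, `dataLocality`, and `nodeSelector` StorageClass parameters are all driven from the `persistence` block of the values file. A sketch of values that would exercise them (the xfs choice and flag are illustrative, not chart defaults; the node tags come from the values file's own example):

    persistence:
      defaultFsType: xfs            # illustrative; the chart default is ext4
      defaultMkfsParams: "-f"       # illustrative mkfs.xfs flag
      defaultDataLocality: best-effort
      defaultNodeSelector:
        enable: true
        selector: "storage,fast"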
@@ -3,7 +3,7 @@ kind: Job
 metadata:
   annotations:
     "helm.sh/hook": pre-delete
-    "helm.sh/hook-delete-policy": hook-succeeded
+    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
   name: longhorn-uninstall
   namespace: {{ include "release_namespace" . }}
   labels: {{- include "longhorn.labels" . | nindent 4 }}
@@ -18,9 +18,7 @@ spec:
       containers:
       - name: longhorn-uninstall
         image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
-        imagePullPolicy: IfNotPresent
-        securityContext:
-          privileged: true
+        imagePullPolicy: {{ .Values.image.pullPolicy }}
         command:
         - longhorn-manager
         - uninstall
@@ -30,20 +28,30 @@ spec:
           valueFrom:
             fieldRef:
               fieldPath: metadata.namespace
-      restartPolicy: OnFailure
+      restartPolicy: Never
       {{- if .Values.privateRegistry.registrySecret }}
       imagePullSecrets:
       - name: {{ .Values.privateRegistry.registrySecret }}
       {{- end }}
       {{- if .Values.longhornManager.priorityClass }}
-      priorityClassName: {{ .Values.longhornManager.priorityClass | quote}}
+      priorityClassName: {{ .Values.longhornManager.priorityClass | quote }}
       {{- end }}
       serviceAccountName: longhorn-service-account
-      {{- if .Values.longhornManager.tolerations }}
+      {{- if or .Values.longhornManager.tolerations .Values.global.cattle.windowsCluster.enabled }}
       tolerations:
+        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
+{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
+        {{- end }}
+        {{- if .Values.longhornManager.tolerations }}
{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
        {{- end }}
-      {{- if .Values.longhornManager.nodeSelector }}
+      {{- end }}
+      {{- if or .Values.longhornManager.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
       nodeSelector:
+        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
+{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
+        {{- end }}
+        {{- if .Values.longhornManager.nodeSelector }}
{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
        {{- end }}
+      {{- end }}
7 chart/templates/validate-psp-install.yaml Normal file
@@ -0,0 +1,7 @@
#{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}}
#{{- if .Values.enablePSP }}
#{{- if not (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }}
#{{- fail "The target cluster does not have the PodSecurityPolicy API resource. Please disable PSPs in this chart before proceeding." -}}
#{{- end }}
#{{- end }}
#{{- end }}
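Although every line above is a YAML comment, Helm still evaluates the template actions inside comments, so this file acts as a render-time guard rather than a manifest: with the toggle below set on a cluster that no longer serves `policy/v1beta1`, templating aborts with the `fail` message above. The values toggle it checks:

    enablePSP: true   # only renders longhorn-psp on clusters that still have the PodSecurityPolicy API (Kubernetes < v1.25)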
@@ -3,121 +3,350 @@
# Declare variables to be passed into your templates.
global:
  cattle:
    # -- System default registry
    systemDefaultRegistry: ""
    windowsCluster:
      # -- Enable this to allow Longhorn to run on the Rancher deployed Windows cluster
      enabled: false
      # -- Tolerate Linux nodes to run Longhorn user deployed components
      tolerations:
      - key: "cattle.io/os"
        value: "linux"
        effect: "NoSchedule"
        operator: "Equal"
      # -- Select Linux nodes to run Longhorn user deployed components
      nodeSelector:
        kubernetes.io/os: "linux"
      defaultSetting:
        # -- Toleration for Longhorn system managed components
        taintToleration: cattle.io/os=linux:NoSchedule
        # -- Node selector for Longhorn system managed components
        systemManagedComponentsNodeSelector: kubernetes.io/os:linux

networkPolicies:
  # -- Enable NetworkPolicies to limit access to the Longhorn pods
  enabled: false
  # -- Create the policy based on your distribution to allow access for the ingress. Options: `k3s`, `rke2`, `rke1`
  type: "k3s"
image:
  longhorn:
    engine:
      # -- Specify Longhorn engine image repository
      repository: longhornio/longhorn-engine
      # -- Specify Longhorn engine image tag
-     tag: v1.2.3
+     tag: master-head
    manager:
      # -- Specify Longhorn manager image repository
      repository: longhornio/longhorn-manager
      # -- Specify Longhorn manager image tag
-     tag: v1.2.3
+     tag: master-head
    ui:
      # -- Specify Longhorn ui image repository
      repository: longhornio/longhorn-ui
      # -- Specify Longhorn ui image tag
-     tag: v1.2.3
+     tag: master-head
    instanceManager:
      # -- Specify Longhorn instance manager image repository
      repository: longhornio/longhorn-instance-manager
      # -- Specify Longhorn instance manager image tag
-     tag: v1_20211210
+     tag: master-head
    shareManager:
      # -- Specify Longhorn share manager image repository
      repository: longhornio/longhorn-share-manager
      # -- Specify Longhorn share manager image tag
-     tag: v1_20211020
+     tag: master-head
    backingImageManager:
      # -- Specify Longhorn backing image manager image repository
      repository: longhornio/backing-image-manager
      # -- Specify Longhorn backing image manager image tag
-     tag: v2_20210820
+     tag: master-head
    supportBundleKit:
      # -- Specify Longhorn support bundle manager image repository
      repository: longhornio/support-bundle-kit
      # -- Specify Longhorn support bundle manager image tag
      tag: v0.0.27
  csi:
    attacher:
      # -- Specify CSI attacher image repository. Leave blank to autodetect
      repository: longhornio/csi-attacher
      # -- Specify CSI attacher image tag. Leave blank to autodetect
-     tag: v3.2.1
+     tag: v4.2.0
    provisioner:
      # -- Specify CSI provisioner image repository. Leave blank to autodetect
      repository: longhornio/csi-provisioner
      # -- Specify CSI provisioner image tag. Leave blank to autodetect
-     tag: v2.1.2
+     tag: v3.4.1
    nodeDriverRegistrar:
      # -- Specify CSI node driver registrar image repository. Leave blank to autodetect
      repository: longhornio/csi-node-driver-registrar
      # -- Specify CSI node driver registrar image tag. Leave blank to autodetect
-     tag: v2.3.0
+     tag: v2.7.0
    resizer:
      # -- Specify CSI driver resizer image repository. Leave blank to autodetect
      repository: longhornio/csi-resizer
      # -- Specify CSI driver resizer image tag. Leave blank to autodetect
-     tag: v1.2.0
+     tag: v1.7.0
    snapshotter:
      # -- Specify CSI driver snapshotter image repository. Leave blank to autodetect
      repository: longhornio/csi-snapshotter
      # -- Specify CSI driver snapshotter image tag. Leave blank to autodetect.
-     tag: v3.0.3
+     tag: v6.2.1
    livenessProbe:
      # -- Specify CSI liveness probe image repository. Leave blank to autodetect
      repository: longhornio/livenessprobe
      # -- Specify CSI liveness probe image tag. Leave blank to autodetect
      tag: v2.9.0
  openshift:
    oauthProxy:
      # -- For openshift user. Specify oauth proxy image repository
      repository: quay.io/openshift/origin-oauth-proxy
      # -- For openshift user. Specify oauth proxy image tag. Note: Use your OCP/OKD 4.X Version, Current Stable is 4.14
      tag: 4.14
  # -- Image pull policy which applies to all user deployed Longhorn Components, e.g. Longhorn manager, Longhorn driver, Longhorn UI
  pullPolicy: IfNotPresent
service:
  ui:
    # -- Define Longhorn UI service type. Options: `ClusterIP`, `NodePort`, `LoadBalancer`, `Rancher-Proxy`
    type: ClusterIP
    # -- NodePort port number (to set explicitly, choose port between 30000-32767)
    nodePort: null
  manager:
    # -- Define Longhorn manager service type.
    type: ClusterIP
    # -- NodePort port number (to set explicitly, choose port between 30000-32767)
    nodePort: ""
persistence:
  # -- Set Longhorn StorageClass as default
  defaultClass: true
  # -- Set filesystem type for Longhorn StorageClass
  defaultFsType: ext4
  # -- Set mkfs options for Longhorn StorageClass
  defaultMkfsParams: ""
  # -- Set replica count for Longhorn StorageClass
  defaultClassReplicaCount: 3
  # -- Set data locality for Longhorn StorageClass. Options: `disabled`, `best-effort`
  defaultDataLocality: disabled
  # -- Define reclaim policy. Options: `Retain`, `Delete`
  reclaimPolicy: Delete
  # -- Set volume migratable for Longhorn StorageClass
  migratable: false
  recurringJobSelector:
    # -- Enable recurring job selector for Longhorn StorageClass
    enable: false
    # -- Recurring job selector list for Longhorn StorageClass. Please be careful of quotes of input. e.g., `[{"name":"backup", "isGroup":true}]`
    jobList: []
  backingImage:
    # -- Set backing image for Longhorn StorageClass
    enable: false
    # -- Specify a backing image that will be used by Longhorn volumes in Longhorn StorageClass. If it does not exist, the backing image data source type and backing image data source parameters should be specified so that Longhorn can create the backing image before using it
    name: ~
    # -- Specify the data source type for the backing image used in Longhorn StorageClass.
    # If the backing image does not exist, Longhorn will use this field to create a backing image. Otherwise, Longhorn will use it to verify the selected backing image.
    dataSourceType: ~
    # -- Specify the data source parameters for the backing image used in Longhorn StorageClass. This option accepts a json string of a map. e.g., `'{\"url\":\"https://backing-image-example.s3-region.amazonaws.com/test-backing-image\"}'`.
    dataSourceParameters: ~
    # -- Specify the expected SHA512 checksum of the selected backing image in Longhorn StorageClass
    expectedChecksum: ~
  defaultNodeSelector:
    # -- Enable Node selector for Longhorn StorageClass
    enable: false
    # -- This selector enables only certain nodes having these tags to be used for the volume. e.g. `"storage,fast"`
    selector: ""
  # -- Allow automatically removing snapshots during filesystem trim for Longhorn StorageClass. Options: `ignored`, `enabled`, `disabled`
  removeSnapshotsDuringFilesystemTrim: ignored

helmPreUpgradeCheckerJob:
  enabled: true
csi:
  # -- Specify kubelet root-dir. Leave blank to autodetect
  kubeletRootDir: ~
  # -- Specify replica count of CSI Attacher. Leave blank to use default count: 3
  attacherReplicaCount: ~
  # -- Specify replica count of CSI Provisioner. Leave blank to use default count: 3
  provisionerReplicaCount: ~
  # -- Specify replica count of CSI Resizer. Leave blank to use default count: 3
  resizerReplicaCount: ~
  # -- Specify replica count of CSI Snapshotter. Leave blank to use default count: 3
  snapshotterReplicaCount: ~
defaultSettings:
  # -- The endpoint used to access the backupstore. Available: NFS, CIFS, AWS, GCP, AZURE.
  backupTarget: ~
  # -- The name of the Kubernetes secret associated with the backup target.
  backupTargetCredentialSecret: ~
  # -- If this setting is enabled, Longhorn will automatically attach the volume and take a snapshot/backup
  # when it is time to do a recurring snapshot/backup.
  allowRecurringJobWhileVolumeDetached: ~
  # -- Create the default Disk automatically only on Nodes with the label "node.longhorn.io/create-default-disk=true" if no other disks exist.
  # If disabled, the default disk will be created on all new nodes when each node is first added.
  createDefaultDiskLabeledNodes: ~
  # -- Default path to use for storing data on a host. By default "/var/lib/longhorn/"
  defaultDataPath: ~
  # -- A Longhorn volume has data locality if there is a local replica of the volume on the same node as the pod which is using the volume.
  defaultDataLocality: ~
  # -- Allow scheduling on nodes with existing healthy replicas of the same volume. By default false.
  replicaSoftAntiAffinity: ~
  # -- If enabled, Longhorn automatically rebalances replicas when an available node is discovered.
  replicaAutoBalance: ~
  # -- The over-provisioning percentage defines how much storage can be allocated relative to the hard drive's capacity. By default 200.
  storageOverProvisioningPercentage: ~
  # -- If the minimum available disk capacity exceeds the actual percentage of available disk capacity,
  # the disk becomes unschedulable until more space is freed up. By default 25.
  storageMinimalAvailablePercentage: ~
  # -- The reserved percentage specifies the percentage of disk space that will not be allocated to the default disk on each new Longhorn node.
  storageReservedPercentageForDefaultDisk: ~
  # -- Upgrade Checker will check for a new Longhorn version periodically.
  # When there is a new version available, a notification will appear in the UI. By default true.
  upgradeChecker: ~
  # -- The default number of replicas when a volume is created from the Longhorn UI.
  # For Kubernetes configuration, update the `numberOfReplicas` in the StorageClass. By default 3.
  defaultReplicaCount: ~
  # -- The 'storageClassName' is given to PVs and PVCs that are created for an existing Longhorn volume. The StorageClass name can also be used as a label,
  # so it is possible to use a Longhorn StorageClass to bind a workload to an existing PV without creating a Kubernetes StorageClass object.
  # By default 'longhorn-static'.
  defaultLonghornStaticStorageClass: ~
  # -- In seconds. The backupstore poll interval determines how often Longhorn checks the backupstore for new backups.
  # Set to 0 to disable the polling. By default 300.
  backupstorePollInterval: ~
  # -- In minutes. This setting determines how long Longhorn will keep a backup resource that has failed. Set to 0 to disable the auto-deletion.
  failedBackupTTL: ~
  # -- Restore recurring jobs from the backup volume on the backup target and create recurring jobs if they do not exist during a backup restoration.
  restoreVolumeRecurringJobs: ~
  # -- This setting specifies how many successful backup or snapshot job histories should be retained. History will not be retained if the value is 0.
  recurringSuccessfulJobsHistoryLimit: ~
  # -- This setting specifies how many failed backup or snapshot job histories should be retained. History will not be retained if the value is 0.
  recurringFailedJobsHistoryLimit: ~
  # -- This setting specifies how many failed support bundles can exist in the cluster.
  # Set this value to **0** to have Longhorn automatically purge all failed support bundles.
  supportBundleFailedHistoryLimit: ~
  # -- taintToleration for longhorn system components
  taintToleration: ~
  # -- nodeSelector for longhorn system components
  systemManagedComponentsNodeSelector: ~
  # -- priorityClass for longhorn system components
  priorityClass: ~
  # -- If enabled, volumes will be automatically salvaged when all the replicas become faulty, e.g. due to network disconnection.
  # Longhorn will try to figure out which replica(s) are usable, then use them for the volume. By default true.
  autoSalvage: ~
  # -- If enabled, Longhorn will automatically delete the workload pod that is managed by a controller (e.g. deployment, statefulset, daemonset, etc...)
  # when the Longhorn volume is detached unexpectedly (e.g. during Kubernetes upgrade, Docker reboot, or network disconnect).
  # By deleting the pod, its controller restarts the pod and Kubernetes handles volume reattachment and remount.
  autoDeletePodWhenVolumeDetachedUnexpectedly: ~
  # -- Disable Longhorn manager from scheduling replicas on Kubernetes cordoned nodes. By default true.
  disableSchedulingOnCordonedNode: ~
  # -- Allow scheduling new Replicas of a Volume to the Nodes in the same Zone as existing healthy Replicas.
  # Nodes that don't belong to any Zone will be treated as if they belong to the same Zone.
  # Notice that Longhorn relies on the label `topology.kubernetes.io/zone=<Zone name of the node>` in the Kubernetes node object to identify the zone.
  # By default true.
  replicaZoneSoftAntiAffinity: ~
  # -- Allow scheduling on disks with existing healthy replicas of the same volume. By default true.
  replicaDiskSoftAntiAffinity: ~
  # -- Defines the Longhorn action when a Volume is stuck with a StatefulSet/Deployment Pod on a node that is down.
  nodeDownPodDeletionPolicy: ~
  allowNodeDrainWithLastHealthyReplica: ~
  mkfsExt4Parameters: ~
  disableReplicaRebuild: ~
  # -- Define the policy to use when a node with the last healthy replica of a volume is drained.
  nodeDrainPolicy: ~
  # -- In seconds. The interval determines how long Longhorn will wait at least in order to reuse the existing data on a failed replica
  # rather than directly creating a new replica for a degraded volume.
  replicaReplenishmentWaitInterval: ~
  # -- This setting controls how many replicas on a node can be rebuilt simultaneously.
  concurrentReplicaRebuildPerNodeLimit: ~
  # -- This setting controls how many volumes on a node can restore the backup concurrently. Set the value to **0** to disable backup restore.
  concurrentVolumeBackupRestorePerNodeLimit: ~
  # -- This setting is only for volumes created by the UI.
  # By default this is false, meaning there will be a revision counter file to track every write to the volume.
  # During salvage recovery, Longhorn will pick the replica with the largest revision counter as the candidate to recover the whole volume.
  # If the revision counter is disabled, Longhorn will not track every write to the volume.
  # During salvage recovery, Longhorn will instead use the 'volume-head-xxx.img' file's last modification time and
  # file size to pick the replica candidate to recover the whole volume.
  disableRevisionCounter: ~
  # -- This setting defines the Image Pull Policy of Longhorn system managed pods,
  # e.g. instance manager, engine image, CSI driver, etc.
  # The new Image Pull Policy will only apply after the system managed pods restart.
  systemManagedPodsImagePullPolicy: ~
  # -- This setting allows users to create and attach a volume that doesn't have all the replicas scheduled at the time of creation.
  allowVolumeCreationWithDegradedAvailability: ~
  # -- This setting enables Longhorn to automatically clean up the system generated snapshot after replica rebuild is done.
  autoCleanupSystemGeneratedSnapshot: ~
  # -- This setting controls how Longhorn automatically upgrades volumes' engines to the new default engine image after upgrading Longhorn manager.
  # The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time.
  # If the value is 0, Longhorn will not automatically upgrade volumes' engines to the default version.
  concurrentAutomaticEngineUpgradePerNodeLimit: ~
  # -- This interval in minutes determines how long Longhorn will wait before cleaning up the backing image file when there is no replica in the disk using it.
  backingImageCleanupWaitInterval: ~
  # -- This interval in seconds determines how long Longhorn will wait before re-downloading the backing image file
  # when all disk files of this backing image become failed or unknown.
  backingImageRecoveryWaitInterval: ~
  guaranteedEngineManagerCPU: ~
  guaranteedReplicaManagerCPU: ~
  # -- This integer value indicates what percentage of the total allocatable CPU on each node will be reserved for each instance manager Pod.
  # You can leave it with the default value, which is 12%.
  guaranteedInstanceManagerCPU: ~
  # -- Enabling this setting will notify Longhorn that the cluster is using the Kubernetes Cluster Autoscaler.
  kubernetesClusterAutoscalerEnabled: ~
  # -- This setting allows Longhorn to automatically delete an orphan resource and its corresponding orphaned data, such as stale replicas.
  # Orphan resources on down or unknown nodes will not be cleaned up automatically.
  orphanAutoDeletion: ~
  # -- Longhorn uses the storage network for in-cluster data traffic. Leave this blank to use the Kubernetes cluster network.
  storageNetwork: ~
  # -- This flag is designed to prevent Longhorn from being accidentally uninstalled, which would lead to data loss.
  deletingConfirmationFlag: ~
  # -- In seconds. The setting specifies the timeout between the engine and replica(s), and the value should be between 8 and 30 seconds.
  # The default value is 8 seconds.
  engineReplicaTimeout: ~
  # -- This setting allows users to enable or disable snapshot hashing and data integrity checking.
  snapshotDataIntegrity: ~
  # -- Hashing snapshot disk files impacts the performance of the system.
  # The immediate snapshot hashing and checking can be disabled to minimize the impact after creating a snapshot.
  snapshotDataIntegrityImmediateCheckAfterSnapshotCreation: ~
  # -- Unix-cron string format. The setting specifies when Longhorn checks the data integrity of snapshot disk files.
  snapshotDataIntegrityCronjob: ~
  # -- This setting allows the Longhorn filesystem trim feature to automatically mark the latest snapshot and
  # its ancestors as removed, stopping at the snapshot containing multiple children.
  removeSnapshotsDuringFilesystemTrim: ~
  # -- This feature supports fast replica rebuilding.
  # It relies on the checksum of snapshot disk files, so setting snapshot-data-integrity to **enable** or **fast-check** is a prerequisite.
  fastReplicaRebuildEnabled: ~
  # -- In seconds. The setting specifies the HTTP client timeout to the file sync server.
  replicaFileSyncHttpClientTimeout: ~
  # -- The log level Panic, Fatal, Error, Warn, Info, Debug, Trace used in longhorn manager. Default to Info.
  logLevel: ~
  # -- This setting allows users to specify the backup compression method.
  backupCompressionMethod: ~
  # -- This setting controls how many worker threads run concurrently per backup.
  backupConcurrentLimit: ~
  # -- This setting controls how many worker threads run concurrently per restore.
  restoreConcurrentLimit: ~
  # -- This allows users to activate the v2 data engine based on SPDK.
  # Currently, it is in the preview phase and should not be utilized in a production environment.
  v2DataEngine: ~
  # -- This setting allows users to enable offline replica rebuilding for volumes using the v2 data engine.
  offlineReplicaRebuilding: ~
  # -- Allow Scheduling Empty Node Selector Volumes To Any Node
  allowEmptyNodeSelectorVolume: ~
  # -- Allow Scheduling Empty Disk Selector Volumes To Any Disk
  allowEmptyDiskSelectorVolume: ~
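Most of these settings default to `~` (nil), in which case Longhorn keeps its built-in defaults. A sketch of a minimal backup configuration pointing at the NFS backupstore deployed later in this diff; the service DNS name is an assumption based on the usual Longhorn test manifests and is not shown here:

    defaultSettings:
      backupTarget: nfs://longhorn-test-nfs-svc.default:/opt/backupstore   # assumed service name
      backupstorePollInterval: 300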
privateRegistry:
  # -- Set `true` to create a new private registry secret
  createSecret: ~
  # -- URL of private registry. Leave blank to apply system default registry
  registryUrl: ~
  # -- User used to authenticate to private registry
  registryUser: ~
  # -- Password used to authenticate to private registry
  registryPasswd: ~
  # -- If createSecret is true, create a Kubernetes secret with this name; otherwise use the existing secret of this name. Use it to pull images from your private registry
  registrySecret: ~
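A filled-in sketch, with every value hypothetical; `createSecret: true` makes the chart create `registrySecret`, otherwise an existing secret of that name is reused:

    privateRegistry:
      createSecret: true
      registryUrl: "registry.example.com"        # hypothetical
      registryUser: "admin"                      # hypothetical
      registryPasswd: "s3cret"                   # hypothetical
      registrySecret: "longhorn-registry-secret" # hypothetical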
longhornManager:
  log:
    # -- Options: `plain`, `json`
    format: plain
  # -- Priority class for longhorn manager
  priorityClass: ~
  # -- Tolerate nodes to run Longhorn manager
  tolerations: []
  ## If you want to set tolerations for Longhorn Manager DaemonSet, delete the `[]` in the line above
  ## and uncomment this example block
@@ -125,14 +354,23 @@ longhornManager:
  #   operator: "Equal"
  #   value: "value"
  #   effect: "NoSchedule"
  # -- Select nodes to run Longhorn manager
  nodeSelector: {}
  ## If you want to set node selector for Longhorn Manager DaemonSet, delete the `{}` in the line above
  ## and uncomment this example block
  #   label-key1: "label-value1"
  #   label-key2: "label-value2"
  # -- Annotation used in Longhorn manager service
  serviceAnnotations: {}
  ## If you want to set annotations for the Longhorn Manager service, delete the `{}` in the line above
  ## and uncomment this example block
  #   annotation-key1: "annotation-value1"
  #   annotation-key2: "annotation-value2"
longhornDriver:
  # -- Priority class for longhorn driver
  priorityClass: ~
  # -- Tolerate nodes to run Longhorn driver
  tolerations: []
  ## If you want to set tolerations for Longhorn Driver Deployer Deployment, delete the `[]` in the line above
  ## and uncomment this example block
@@ -140,6 +378,7 @@ longhornDriver:
  #   operator: "Equal"
  #   value: "value"
  #   effect: "NoSchedule"
  # -- Select nodes to run Longhorn driver
  nodeSelector: {}
  ## If you want to set node selector for Longhorn Driver Deployer Deployment, delete the `{}` in the line above
  ## and uncomment this example block
@@ -147,7 +386,11 @@ longhornDriver:
  #   label-key2: "label-value2"
longhornUI:
  # -- Replica count for longhorn ui
  replicas: 2
  # -- Priority class for longhorn ui
  priorityClass: ~
  # -- Tolerate nodes to run Longhorn UI
  tolerations: []
  ## If you want to set tolerations for Longhorn UI Deployment, delete the `[]` in the line above
  ## and uncomment this example block
@@ -155,43 +398,37 @@ longhornUI:
  #   operator: "Equal"
  #   value: "value"
  #   effect: "NoSchedule"
  # -- Select nodes to run Longhorn UI
  nodeSelector: {}
  ## If you want to set node selector for Longhorn UI Deployment, delete the `{}` in the line above
  ## and uncomment this example block
  #   label-key1: "label-value1"
  #   label-key2: "label-value2"
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
#   cpu: 100m
#   memory: 128Mi
# requests:
#   cpu: 100m
#   memory: 128Mi
#
ingress:
-  ## Set to true to enable ingress record generation
+  # -- Set to true to enable ingress record generation
  enabled: false

-  ## Add ingressClassName to the Ingress
-  ## Can replace the kubernetes.io/ingress.class annotation on v1.18+
+  # -- Add ingressClassName to the Ingress
+  # Can replace the kubernetes.io/ingress.class annotation on v1.18+
  ingressClassName: ~

  # -- Layer 7 Load Balancer hostname
  host: sslip.io

-  ## Set this to true in order to enable TLS on the ingress record
-  ## A side effect of this will be that the backend service will be connected at port 443
+  # -- Set this to true in order to enable TLS on the ingress record
  tls: false

-  ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
+  # -- Enable this to connect to the backend service at port 443
  secureBackends: false

+  # -- If TLS is set to true, you must declare what secret will store the key/certificate for TLS
  tlsSecret: longhorn.local-tls

-  ## Ingress annotations done as key:value pairs
+  # -- If ingress is enabled you can set the default ingress path;
+  # you can then access the UI by using the following full path {{host}}+{{path}}
  path: /

  ## If you're using kube-lego, you will want to add:
  ## kubernetes.io/tls-acme: true
  ##
@@ -199,10 +436,12 @@ ingress:
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
  ##
  ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
+  # -- Ingress annotations done as key:value pairs
  annotations:
  #  kubernetes.io/ingress.class: nginx
  #  kubernetes.io/tls-acme: true

+  # -- If you're providing your own certificates, please use this to add the certificates as secrets
  secrets:
  ## If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
@@ -217,12 +456,25 @@ ingress:
  #  key:
  #  certificate:
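A sketch of an enabled ingress, assuming an nginx ingress controller and a pre-created TLS secret (host and secret names are placeholders):

    ingress:
      enabled: true
      ingressClassName: nginx
      host: longhorn.example.com          # placeholder
      tls: true
      tlsSecret: longhorn-example-tls     # placeholder; must exist in the release namespace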
-# Configure a pod security policy in the Longhorn namespace to allow privileged pods
-enablePSP: true
+# -- For Kubernetes < v1.25, if your cluster enables the Pod Security Policy admission controller,
+# set this to `true` to ship longhorn-psp, which allows privileged Longhorn pods to start
+enablePSP: false

## Specify override namespace, specifically this is useful for using longhorn as sub-chart
## and its release namespace is not the `longhorn-system`
namespaceOverride: ""

-# Annotations to add to the Longhorn Manager DaemonSet Pods. Optional.
+# -- Annotations to add to the Longhorn Manager DaemonSet Pods. Optional.
annotations: {}

serviceAccount:
  # -- Annotations to add to the service account
  annotations: {}

## openshift settings
openshift:
  # -- Enable when using openshift
  enabled: false
  ui:
    # -- UI route in openshift environment
    route: "longhorn-ui"
    # -- UI port in openshift environment
    port: 443
    # -- UI proxy in openshift environment
    proxy: 8443
48 deploy/backupstores/azurite-backupstore.yaml Normal file
@@ -0,0 +1,48 @@
# same secret for longhorn-system namespace
apiVersion: v1
kind: Secret
metadata:
  name: azblob-secret
  namespace: longhorn-system
type: Opaque
data:
  AZBLOB_ACCOUNT_NAME: ZGV2c3RvcmVhY2NvdW50MQ==
  AZBLOB_ACCOUNT_KEY: RWJ5OHZkTTAyeE5PY3FGbHFVd0pQTGxtRXRsQ0RYSjFPVXpGVDUwdVNSWjZJRnN1RnEyVVZFckN6NEk2dHEvSzFTWkZQVE90ci9LQkhCZWtzb0dNR3c9PQ==
  AZBLOB_ENDPOINT: aHR0cDovL2F6YmxvYi1zZXJ2aWNlLmRlZmF1bHQ6MTAwMDAv
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: longhorn-test-azblob
  namespace: default
  labels:
    app: longhorn-test-azblob
spec:
  replicas: 1
  selector:
    matchLabels:
      app: longhorn-test-azblob
  template:
    metadata:
      labels:
        app: longhorn-test-azblob
    spec:
      containers:
      - name: azurite
        image: mcr.microsoft.com/azure-storage/azurite:3.23.0
        ports:
        - containerPort: 10000
---
apiVersion: v1
kind: Service
metadata:
  name: azblob-service
  namespace: default
spec:
  selector:
    app: longhorn-test-azblob
  ports:
    - port: 10000
      targetPort: 10000
      protocol: TCP
  sessionAffinity: ClientIP
87 deploy/backupstores/cifs-backupstore.yaml Normal file
@@ -0,0 +1,87 @@
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: longhorn-system
type: Opaque
data:
  CIFS_USERNAME: bG9uZ2hvcm4tY2lmcy11c2VybmFtZQ== # longhorn-cifs-username
  CIFS_PASSWORD: bG9uZ2hvcm4tY2lmcy1wYXNzd29yZA== # longhorn-cifs-password
---
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: default
type: Opaque
data:
  CIFS_USERNAME: bG9uZ2hvcm4tY2lmcy11c2VybmFtZQ== # longhorn-cifs-username
  CIFS_PASSWORD: bG9uZ2hvcm4tY2lmcy1wYXNzd29yZA== # longhorn-cifs-password
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: longhorn-test-cifs
  namespace: default
  labels:
    app: longhorn-test-cifs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: longhorn-test-cifs
  template:
    metadata:
      labels:
        app: longhorn-test-cifs
    spec:
      volumes:
      - name: cifs-volume
        emptyDir: {}
      containers:
      - name: longhorn-test-cifs-container
        image: derekbit/samba:latest
        ports:
        - containerPort: 139
        - containerPort: 445
        imagePullPolicy: Always
        env:
        - name: EXPORT_PATH
          value: /opt/backupstore
        - name: CIFS_DISK_IMAGE_SIZE_MB
          value: "4096"
        - name: CIFS_USERNAME
          valueFrom:
            secretKeyRef:
              name: cifs-secret
              key: CIFS_USERNAME
        - name: CIFS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cifs-secret
              key: CIFS_PASSWORD
        securityContext:
          privileged: true
          capabilities:
            add: ["SYS_ADMIN", "DAC_READ_SEARCH"]
        volumeMounts:
        - name: cifs-volume
          mountPath: "/opt/backupstore"
        args: ["-u", "$(CIFS_USERNAME);$(CIFS_PASSWORD)", "-s", "backupstore;$(EXPORT_PATH);yes;no;no;all;none"]
---
kind: Service
apiVersion: v1
metadata:
  name: longhorn-test-cifs-svc
  namespace: default
spec:
  selector:
    app: longhorn-test-cifs
  clusterIP: None
  ports:
  - name: netbios-port
    port: 139
    targetPort: 139
  - name: microsoft-port
    port: 445
    targetPort: 445
@@ -24,14 +24,23 @@ data:
  AWS_ENDPOINTS: aHR0cHM6Ly9taW5pby1zZXJ2aWNlLmRlZmF1bHQ6OTAwMA== # https://minio-service.default:9000
  AWS_CERT: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMRENDQWhTZ0F3SUJBZ0lSQU1kbzQycGhUZXlrMTcvYkxyWjVZRHN3RFFZSktvWklodmNOQVFFTEJRQXcKR2pFWU1CWUdBMVVFQ2hNUFRHOXVaMmh2Y200Z0xTQlVaWE4wTUNBWERUSXdNRFF5TnpJek1EQXhNVm9ZRHpJeApNakF3TkRBek1qTXdNREV4V2pBYU1SZ3dGZ1lEVlFRS0V3OU1iMjVuYUc5eWJpQXRJRlJsYzNRd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEWHpVdXJnUFpEZ3pUM0RZdWFlYmdld3Fvd2RlQUQKODRWWWF6ZlN1USs3K21Oa2lpUVBvelVVMmZvUWFGL1BxekJiUW1lZ29hT3l5NVhqM1VFeG1GcmV0eDBaRjVOVgpKTi85ZWFJNWRXRk9teHhpMElPUGI2T0RpbE1qcXVEbUVPSXljdjRTaCsvSWo5Zk1nS0tXUDdJZGxDNUJPeThkCncwOVdkckxxaE9WY3BKamNxYjN6K3hISHd5Q05YeGhoRm9tb2xQVnpJbnlUUEJTZkRuSDBuS0lHUXl2bGhCMGsKVHBHSzYxc2prZnFTK3hpNTlJeHVrbHZIRXNQcjFXblRzYU9oaVh6N3lQSlorcTNBMWZoVzBVa1JaRFlnWnNFbQovZ05KM3JwOFhZdURna2kzZ0UrOElXQWRBWHExeWhqRDdSSkI4VFNJYTV0SGpKUUtqZ0NlSG5HekFnTUJBQUdqCmF6QnBNQTRHQTFVZER3RUIvd1FFQXdJQ3BEQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBVEFQQmdOVkhSTUIKQWY4RUJUQURBUUgvTURFR0ExVWRFUVFxTUNpQ0NXeHZZMkZzYUc5emRJSVZiV2x1YVc4dGMyVnlkbWxqWlM1awpaV1poZFd4MGh3Ui9BQUFCTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDbUZMMzlNSHVZMzFhMTFEajRwMjVjCnFQRUM0RHZJUWozTk9kU0dWMmQrZjZzZ3pGejFXTDhWcnF2QjFCMVM2cjRKYjJQRXVJQkQ4NFlwVXJIT1JNU2MKd3ViTEppSEtEa0Jmb2U5QWI1cC9VakpyS0tuajM0RGx2c1cvR3AwWTZYc1BWaVdpVWorb1JLbUdWSTI0Q0JIdgpnK0JtVzNDeU5RR1RLajk0eE02czNBV2xHRW95YXFXUGU1eHllVWUzZjFBWkY5N3RDaklKUmVWbENtaENGK0JtCmFUY1RSUWN3cVdvQ3AwYmJZcHlERFlwUmxxOEdQbElFOW8yWjZBc05mTHJVcGFtZ3FYMmtYa2gxa3lzSlEralAKelFadHJSMG1tdHVyM0RuRW0yYmk0TktIQVFIcFc5TXUxNkdRakUxTmJYcVF0VEI4OGpLNzZjdEg5MzRDYWw2VgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t
---
-apiVersion: v1
-kind: Pod
+apiVersion: apps/v1
+kind: Deployment
metadata:
  name: longhorn-test-minio
  namespace: default
  labels:
    app: longhorn-test-minio
spec:
+ replicas: 1
+ selector:
+   matchLabels:
+     app: longhorn-test-minio
+ template:
+   metadata:
+     labels:
+       app: longhorn-test-minio
+   spec:
      volumes:
      - name: minio-volume
        emptyDir: {}
@@ -43,18 +52,17 @@ spec:
            path: public.crt
          - key: AWS_CERT_KEY
            path: private.key
      containers:
      - name: minio
-       image: longhornio/minio:RELEASE.2020-10-18T21-54-12Z
-       command: ["sh", "-c", "mkdir -p /storage/backupbucket && mkdir -p /root/.minio/certs && ln -s /root/certs/private.key /root/.minio/certs/private.key && ln -s /root/certs/public.crt /root/.minio/certs/public.crt && exec /usr/bin/minio server /storage"]
+       image: minio/minio:RELEASE.2022-02-01T18-00-14Z
+       command: ["sh", "-c", "mkdir -p /storage/backupbucket && mkdir -p /root/.minio/certs && ln -s /root/certs/private.key /root/.minio/certs/private.key && ln -s /root/certs/public.crt /root/.minio/certs/public.crt && exec minio server /storage"]
        env:
-       - name: MINIO_ACCESS_KEY
+       - name: MINIO_ROOT_USER
          valueFrom:
            secretKeyRef:
              name: minio-secret
              key: AWS_ACCESS_KEY_ID
-       - name: MINIO_SECRET_KEY
+       - name: MINIO_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: minio-secret
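To point Longhorn at this test MinIO instance, the backup target and its credential secret are typically set like the sketch below. The bucket name matches the `mkdir -p /storage/backupbucket` in the container command above, while the region component is a placeholder that MinIO ignores:

    defaultSettings:
      backupTarget: s3://backupbucket@us-east-1/      # region is a placeholder
      backupTargetCredentialSecret: minio-secret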
@@ -1,11 +1,19 @@
-apiVersion: v1
-kind: Pod
+apiVersion: apps/v1
+kind: Deployment
metadata:
  name: longhorn-test-nfs
  namespace: default
  labels:
    app: longhorn-test-nfs
spec:
+ selector:
+   matchLabels:
+     app: longhorn-test-nfs
+ template:
+   metadata:
+     labels:
+       app: longhorn-test-nfs
+   spec:
      volumes:
      - name: nfs-volume
        emptyDir: {}
@@ -1,11 +1,13 @@
-longhornio/csi-attacher:v3.2.1
-longhornio/csi-provisioner:v2.1.2
-longhornio/csi-resizer:v1.2.0
-longhornio/csi-snapshotter:v3.0.3
-longhornio/csi-node-driver-registrar:v2.3.0
-longhornio/backing-image-manager:v2_20210820
-longhornio/longhorn-engine:v1.2.3
-longhornio/longhorn-instance-manager:v1_20211210
-longhornio/longhorn-manager:v1.2.3
-longhornio/longhorn-share-manager:v1_20211020
-longhornio/longhorn-ui:v1.2.3
+longhornio/csi-attacher:v4.2.0
+longhornio/csi-provisioner:v3.4.1
+longhornio/csi-resizer:v1.7.0
+longhornio/csi-snapshotter:v6.2.1
+longhornio/csi-node-driver-registrar:v2.7.0
+longhornio/livenessprobe:v2.9.0
+longhornio/backing-image-manager:master-head
+longhornio/longhorn-engine:master-head
+longhornio/longhorn-instance-manager:master-head
+longhornio/longhorn-manager:master-head
+longhornio/longhorn-share-manager:master-head
+longhornio/longhorn-ui:master-head
+longhornio/support-bundle-kit:v0.0.27
5262 deploy/longhorn.yaml (file diff suppressed because it is too large)
61 deploy/podsecuritypolicy.yaml Normal file
@@ -0,0 +1,61 @@
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: longhorn-psp
spec:
  privileged: true
  allowPrivilegeEscalation: true
  requiredDropCapabilities:
  - NET_RAW
  allowedCapabilities:
  - SYS_ADMIN
  hostNetwork: false
  hostIPC: false
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - secret
  - projected
  - hostPath
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: longhorn-psp-role
  namespace: longhorn-system
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  verbs:
  - use
  resourceNames:
  - longhorn-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: longhorn-psp-binding
  namespace: longhorn-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: longhorn-psp-role
subjects:
- kind: ServiceAccount
  name: longhorn-service-account
  namespace: longhorn-system
- kind: ServiceAccount
  name: default
  namespace: longhorn-system
36 deploy/prerequisite/longhorn-cifs-installation.yaml Normal file
@@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: longhorn-cifs-installation
  labels:
    app: longhorn-cifs-installation
  annotations:
    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y cifs-utils; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y cifs-utils; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y cifs-utils; fi && if [ $? -eq 0 ]; then echo "cifs install successfully"; else echo "cifs utilities install failed error code $?"; fi
spec:
  selector:
    matchLabels:
      app: longhorn-cifs-installation
  template:
    metadata:
      labels:
        app: longhorn-cifs-installation
    spec:
      hostNetwork: true
      hostPID: true
      initContainers:
      - name: cifs-installation
        command:
        - nsenter
        - --mount=/proc/1/ns/mnt
        - --
        - bash
        - -c
        - *cmd
        image: alpine:3.12
        securityContext:
          privileged: true
      containers:
      - name: sleep
        image: registry.k8s.io/pause:3.1
  updateStrategy:
    type: RollingUpdate
@@ -5,7 +5,7 @@ metadata:
  labels:
    app: longhorn-iscsi-installation
  annotations:
-   command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y open-iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y open-iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y iscsi-initiator-utils && echo "InitiatorName=$(/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid; fi && if [ $? -eq 0 ]; then echo "iscsi install successfully"; else echo "iscsi install failed error code $?"; fi
+   command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y open-iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid && sudo modprobe iscsi_tcp; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y open-iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid && sudo modprobe iscsi_tcp; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y iscsi-initiator-utils && echo "InitiatorName=$(/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid && sudo modprobe iscsi_tcp; fi && if [ $? -eq 0 ]; then echo "iscsi install successfully"; else echo "iscsi install failed error code $?"; fi
spec:
  selector:
    matchLabels:
@@ -26,11 +26,11 @@ spec:
        - bash
        - -c
        - *cmd
-       image: alpine:3.12
+       image: alpine:3.17
        securityContext:
          privileged: true
      containers:
      - name: sleep
-       image: k8s.gcr.io/pause:3.1
+       image: registry.k8s.io/pause:3.1
  updateStrategy:
    type: RollingUpdate
35 deploy/prerequisite/longhorn-iscsi-selinux-workaround.yaml Normal file
@@ -0,0 +1,35 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: longhorn-iscsi-selinux-workaround
  labels:
    app: longhorn-iscsi-selinux-workaround
  annotations:
    command: &cmd if ! rpm -q policycoreutils > /dev/null 2>&1; then echo "failed to apply workaround; only applicable in Fedora based distros with SELinux enabled"; exit; elif cd /tmp && echo '(allow iscsid_t self (capability (dac_override)))' > local_longhorn.cil && semodule -vi local_longhorn.cil && rm -f local_longhorn.cil; then echo "applied workaround successfully"; else echo "failed to apply workaround; error code $?"; fi
spec:
  selector:
    matchLabels:
      app: longhorn-iscsi-selinux-workaround
  template:
    metadata:
      labels:
        app: longhorn-iscsi-selinux-workaround
    spec:
      hostPID: true
      initContainers:
      - name: iscsi-selinux-workaround
        command:
        - nsenter
        - --mount=/proc/1/ns/mnt
        - --
        - bash
        - -c
        - *cmd
        image: alpine:3.17
        securityContext:
          privileged: true
      containers:
      - name: sleep
        image: registry.k8s.io/pause:3.1
  updateStrategy:
    type: RollingUpdate
@@ -5,7 +5,7 @@ metadata:
  labels:
    app: longhorn-nfs-installation
  annotations:
-   command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y nfs-common; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y nfs-client; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y nfs-utils; fi && if [ $? -eq 0 ]; then echo "nfs install successfully"; else echo "nfs install failed error code $?"; fi
+   command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y nfs-common && sudo modprobe nfs; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y nfs-client && sudo modprobe nfs; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y nfs-utils && sudo modprobe nfs; fi && if [ $? -eq 0 ]; then echo "nfs install successfully"; else echo "nfs install failed error code $?"; fi
spec:
  selector:
    matchLabels:
@@ -31,6 +31,6 @@ spec:
          privileged: true
      containers:
      - name: sleep
-       image: k8s.gcr.io/pause:3.1
+       image: registry.k8s.io/pause:3.1
  updateStrategy:
    type: RollingUpdate
36 deploy/prerequisite/longhorn-nvme-cli-installation.yaml Normal file
@@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: longhorn-nvme-cli-installation
  labels:
    app: longhorn-nvme-cli-installation
  annotations:
    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y nvme-cli && sudo modprobe nvme-tcp; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y nvme-cli && sudo modprobe nvme-tcp; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y nvme-cli && sudo modprobe nvme-tcp; fi && if [ $? -eq 0 ]; then echo "nvme-cli install successfully"; else echo "nvme-cli install failed error code $?"; fi
spec:
  selector:
    matchLabels:
      app: longhorn-nvme-cli-installation
  template:
    metadata:
      labels:
        app: longhorn-nvme-cli-installation
    spec:
      hostNetwork: true
      hostPID: true
      initContainers:
      - name: nvme-cli-installation
        command:
        - nsenter
        - --mount=/proc/1/ns/mnt
        - --
        - bash
        - -c
        - *cmd
        image: alpine:3.12
        securityContext:
          privileged: true
      containers:
      - name: sleep
        image: registry.k8s.io/pause:3.1
  updateStrategy:
    type: RollingUpdate
47 deploy/prerequisite/longhorn-spdk-setup.yaml Normal file
@@ -0,0 +1,47 @@
apiVersion: apps/v1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
name: longhorn-spdk-setup
|
||||
labels:
|
||||
app: longhorn-spdk-setup
|
||||
annotations:
|
||||
command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y git; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y git; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y git; fi && if [ $? -eq 0 ]; then echo "git install successfully"; else echo "git install failed error code $?"; fi && rm -rf ${SPDK_DIR}; git clone -b longhorn https://github.com/longhorn/spdk.git ${SPDK_DIR} && bash ${SPDK_DIR}/scripts/setup.sh ${SPDK_OPTION}; if [ $? -eq 0 ]; then echo "vm.nr_hugepages=$((HUGEMEM/2))" >> /etc/sysctl.conf; echo "SPDK environment is configured successfully"; else echo "Failed to configure SPDK environment error code $?"; fi; rm -rf ${SPDK_DIR}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
app: longhorn-spdk-setup
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: longhorn-spdk-setup
|
||||
spec:
|
||||
hostNetwork: true
|
||||
hostPID: true
|
||||
initContainers:
|
||||
- name: longhorn-spdk-setup
|
||||
command:
|
||||
- nsenter
|
||||
- --mount=/proc/1/ns/mnt
|
||||
- --
|
||||
- bash
|
||||
- -c
|
||||
- *cmd
|
||||
image: alpine:3.12
|
||||
env:
|
||||
- name: SPDK_DIR
|
||||
value: "/tmp/spdk"
|
||||
- name: SPDK_OPTION
|
||||
value: ""
|
||||
- name: HUGEMEM
|
||||
value: "1024"
|
||||
- name: PCI_ALLOWED
|
||||
value: "none"
|
||||
- name: DRIVER_OVERRIDE
|
||||
value: "uio_pci_generic"
|
||||
securityContext:
|
||||
privileged: true
|
||||
containers:
|
||||
- name: sleep
|
||||
image: registry.k8s.io/pause:3.1
|
||||
updateStrategy:
|
||||
type: RollingUpdate
|
7 deploy/upgrade_responder_server/README.md Normal file
@ -0,0 +1,7 @@
# Upgrade Responder Helm Chart

This directory contains the helm values for the Longhorn upgrade responder server.
The values are in the file `./chart-values.yaml`.
When you update the content of `./chart-values.yaml`, the automation pipeline will update the Longhorn upgrade responder.
Information about the source chart is in `chart.yaml`.
See [dev/upgrade-responder](../../dev/upgrade-responder/README.md) for manual deployment steps.
372 deploy/upgrade_responder_server/chart-values.yaml Normal file
@ -0,0 +1,372 @@
# Specify the name of the application that is using this Upgrade Responder server
# This will be used to create a database named <application-name>_upgrade_responder
# in the InfluxDB to store all data for this Upgrade Responder
# The name must be in snake case format
applicationName: longhorn

image:
  repository: longhornio/upgrade-responder
  tag: longhorn-head
  pullPolicy: Always

secret:
  name: upgrade-responder-secret
  # Set this to false if you don't want to manage these secrets with helm
  managed: false

resources:
  limits:
    cpu: 400m
    memory: 512Mi
  requests:
    cpu: 200m
    memory: 256Mi

# This configmap contains information about the latest release
# of the application that is using this Upgrade Responder
configMap:
  responseConfig: |-
    {
      "versions": [
        {
          "name": "v1.3.3",
          "releaseDate": "2023-04-19T00:00:00Z",
          "tags": ["stable"]
        },
        {
          "name": "v1.4.3",
          "releaseDate": "2023-07-14T00:00:00Z",
          "tags": ["latest", "stable"]
        },
        {
          "name": "v1.5.1",
          "releaseDate": "2023-07-19T00:00:00Z",
          "tags": ["latest"]
        }
      ]
    }
  requestSchema: |-
    {
      "appVersionSchema": {
        "dataType": "string",
        "maxLen": 200
      },
      "extraTagInfoSchema": {
        "hostKernelRelease": { "dataType": "string", "maxLen": 200 },
        "hostOsDistro": { "dataType": "string", "maxLen": 200 },
        "kubernetesNodeProvider": { "dataType": "string", "maxLen": 200 },
        "kubernetesVersion": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAllowRecurringJobWhileVolumeDetached": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAllowVolumeCreationWithDegradedAvailability": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAutoCleanupSystemGeneratedSnapshot": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAutoDeletePodWhenVolumeDetachedUnexpectedly": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAutoSalvage": { "dataType": "string", "maxLen": 200 },
        "longhornSettingBackupCompressionMethod": { "dataType": "string", "maxLen": 200 },
        "longhornSettingBackupTarget": { "dataType": "string", "maxLen": 200 },
        "longhornSettingCrdApiVersion": { "dataType": "string", "maxLen": 200 },
        "longhornSettingCreateDefaultDiskLabeledNodes": { "dataType": "string", "maxLen": 200 },
        "longhornSettingDefaultDataLocality": { "dataType": "string", "maxLen": 200 },
        "longhornSettingDisableRevisionCounter": { "dataType": "string", "maxLen": 200 },
        "longhornSettingDisableSchedulingOnCordonedNode": { "dataType": "string", "maxLen": 200 },
        "longhornSettingFastReplicaRebuildEnabled": { "dataType": "string", "maxLen": 200 },
        "longhornSettingKubernetesClusterAutoscalerEnabled": { "dataType": "string", "maxLen": 200 },
        "longhornSettingNodeDownPodDeletionPolicy": { "dataType": "string", "maxLen": 200 },
        "longhornSettingNodeDrainPolicy": { "dataType": "string", "maxLen": 200 },
        "longhornSettingOfflineReplicaRebuilding": { "dataType": "string", "maxLen": 200 },
        "longhornSettingOrphanAutoDeletion": { "dataType": "string", "maxLen": 200 },
        "longhornSettingPriorityClass": { "dataType": "string", "maxLen": 200 },
        "longhornSettingRegistrySecret": { "dataType": "string", "maxLen": 200 },
        "longhornSettingRemoveSnapshotsDuringFilesystemTrim": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaAutoBalance": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaZoneSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaDiskSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingRestoreVolumeRecurringJobs": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSnapshotDataIntegrity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSnapshotDataIntegrityCronjob": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSnapshotDataIntegrityImmediateCheckAfterSnapshotCreation": { "dataType": "string", "maxLen": 200 },
        "longhornSettingStorageNetwork": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSystemManagedComponentsNodeSelector": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSystemManagedPodsImagePullPolicy": { "dataType": "string", "maxLen": 200 },
        "longhornSettingTaintToleration": { "dataType": "string", "maxLen": 200 },
        "longhornSettingV2DataEngine": { "dataType": "string", "maxLen": 200 }
      },
      "extraFieldInfoSchema": {
        "longhornInstanceManagerAverageCpuUsageMilliCores": { "dataType": "float" },
        "longhornInstanceManagerAverageMemoryUsageBytes": { "dataType": "float" },
        "longhornManagerAverageCpuUsageMilliCores": { "dataType": "float" },
        "longhornManagerAverageMemoryUsageBytes": { "dataType": "float" },
        "longhornNamespaceUid": { "dataType": "string", "maxLen": 200 },
        "longhornNodeCount": { "dataType": "float" },
        "longhornNodeDiskHDDCount": { "dataType": "float" },
        "longhornNodeDiskNVMeCount": { "dataType": "float" },
        "longhornNodeDiskSSDCount": { "dataType": "float" },
        "longhornSettingBackingImageCleanupWaitInterval": { "dataType": "float" },
        "longhornSettingBackingImageRecoveryWaitInterval": { "dataType": "float" },
        "longhornSettingBackupConcurrentLimit": { "dataType": "float" },
        "longhornSettingBackupstorePollInterval": { "dataType": "float" },
        "longhornSettingConcurrentAutomaticEngineUpgradePerNodeLimit": { "dataType": "float" },
        "longhornSettingConcurrentReplicaRebuildPerNodeLimit": { "dataType": "float" },
        "longhornSettingConcurrentVolumeBackupRestorePerNodeLimit": { "dataType": "float" },
        "longhornSettingDefaultReplicaCount": { "dataType": "float" },
        "longhornSettingEngineReplicaTimeout": { "dataType": "float" },
        "longhornSettingFailedBackupTtl": { "dataType": "float" },
        "longhornSettingGuaranteedInstanceManagerCpu": { "dataType": "float" },
        "longhornSettingRecurringFailedJobsHistoryLimit": { "dataType": "float" },
        "longhornSettingRecurringSuccessfulJobsHistoryLimit": { "dataType": "float" },
        "longhornSettingReplicaFileSyncHttpClientTimeout": { "dataType": "float" },
        "longhornSettingReplicaReplenishmentWaitInterval": { "dataType": "float" },
        "longhornSettingRestoreConcurrentLimit": { "dataType": "float" },
        "longhornSettingStorageMinimalAvailablePercentage": { "dataType": "float" },
        "longhornSettingStorageOverProvisioningPercentage": { "dataType": "float" },
        "longhornSettingStorageReservedPercentageForDefaultDisk": { "dataType": "float" },
        "longhornSettingSupportBundleFailedHistoryLimit": { "dataType": "float" },
        "longhornVolumeAccessModeRwoCount": { "dataType": "float" },
        "longhornVolumeAccessModeRwxCount": { "dataType": "float" },
        "longhornVolumeAccessModeUnknownCount": { "dataType": "float" },
        "longhornVolumeAverageActualSizeBytes": { "dataType": "float" },
        "longhornVolumeAverageNumberOfReplicas": { "dataType": "float" },
        "longhornVolumeAverageSizeBytes": { "dataType": "float" },
        "longhornVolumeAverageSnapshotCount": { "dataType": "float" },
        "longhornVolumeDataLocalityBestEffortCount": { "dataType": "float" },
        "longhornVolumeDataLocalityDisabledCount": { "dataType": "float" },
        "longhornVolumeDataLocalityStrictLocalCount": { "dataType": "float" },
        "longhornVolumeFrontendBlockdevCount": { "dataType": "float" },
        "longhornVolumeFrontendIscsiCount": { "dataType": "float" },
        "longhornVolumeOfflineReplicaRebuildingDisabledCount": { "dataType": "float" },
        "longhornVolumeOfflineReplicaRebuildingEnabledCount": { "dataType": "float" },
        "longhornVolumeReplicaAutoBalanceDisabledCount": { "dataType": "float" },
        "longhornVolumeReplicaSoftAntiAffinityFalseCount": { "dataType": "float" },
        "longhornVolumeReplicaZoneSoftAntiAffinityTrueCount": { "dataType": "float" },
        "longhornVolumeReplicaDiskSoftAntiAffinityTrueCount": { "dataType": "float" },
        "longhornVolumeRestoreVolumeRecurringJobFalseCount": { "dataType": "float" },
        "longhornVolumeSnapshotDataIntegrityDisabledCount": { "dataType": "float" },
        "longhornVolumeSnapshotDataIntegrityFastCheckCount": { "dataType": "float" },
        "longhornVolumeUnmapMarkSnapChainRemovedFalseCount": { "dataType": "float" }
      }
    }

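For orientation, a request matching the `requestSchema` above might look like the following sketch (shown as YAML; all field values are invented for illustration, not taken from a real cluster):

```yaml
# Hypothetical /v1/checkupgrade request body matching the schema above.
appVersion: "v1.4.3"
extraTagInfo:
  kubernetesVersion: "v1.26.5"
  hostOsDistro: "ubuntu"
extraFieldInfo:
  longhornNodeCount: 3
  longhornVolumeAverageSizeBytes: 10737418240
```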
5 deploy/upgrade_responder_server/chart.yaml Normal file
@ -0,0 +1,5 @@
url: https://github.com/longhorn/upgrade-responder.git
commit: 116f807836c29185038cfb005708f0a8d41f4d35
releaseName: longhorn-upgrade-responder
namespace: longhorn-upgrade-responder

55 dev/upgrade-responder/README.md Normal file
@ -0,0 +1,55 @@
## Overview

### Install

1. Install Longhorn.
1. Install Longhorn [upgrade-responder](https://github.com/longhorn/upgrade-responder) stack.
   ```bash
   ./install.sh
   ```
   Sample output:
   ```shell
   secret/influxdb-creds created
   persistentvolumeclaim/influxdb created
   deployment.apps/influxdb created
   service/influxdb created
   Deployment influxdb is running.
   Cloning into 'upgrade-responder'...
   remote: Enumerating objects: 1077, done.
   remote: Counting objects: 100% (1076/1076), done.
   remote: Compressing objects: 100% (454/454), done.
   remote: Total 1077 (delta 573), reused 1049 (delta 565), pack-reused 1
   Receiving objects: 100% (1077/1077), 55.01 MiB | 18.10 MiB/s, done.
   Resolving deltas: 100% (573/573), done.
   Release "longhorn-upgrade-responder" does not exist. Installing it now.
   NAME: longhorn-upgrade-responder
   LAST DEPLOYED: Thu May 11 00:42:44 2023
   NAMESPACE: default
   STATUS: deployed
   REVISION: 1
   TEST SUITE: None
   NOTES:
   1. Get the Upgrade Responder server URL by running these commands:
        export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=upgrade-responder,app.kubernetes.io/instance=longhorn-upgrade-responder" -o jsonpath="{.items[0].metadata.name}")
        kubectl port-forward $POD_NAME 8080:8314 --namespace default
        echo "Upgrade Responder server URL is http://127.0.0.1:8080"
   Deployment longhorn-upgrade-responder is running.
   persistentvolumeclaim/grafana-pvc created
   deployment.apps/grafana created
   service/grafana created
   Deployment grafana is running.

   [Upgrade Checker]
   URL       : http://longhorn-upgrade-responder.default.svc.cluster.local:8314/v1/checkupgrade

   [InfluxDB]
   URL       : http://influxdb.default.svc.cluster.local:8086
   Database  : longhorn_upgrade_responder
   Username  : root
   Password  : root

   [Grafana]
   Dashboard : http://1.2.3.4:30864
   Username  : admin
   Password  : admin
   ```
424 dev/upgrade-responder/install.sh Executable file
@ -0,0 +1,424 @@
#!/bin/bash

UPGRADE_RESPONDER_REPO="https://github.com/longhorn/upgrade-responder.git"
UPGRADE_RESPONDER_REPO_BRANCH="master"
UPGRADE_RESPONDER_VALUE_YAML="upgrade-responder-value.yaml"
UPGRADE_RESPONDER_IMAGE_REPO="longhornio/upgrade-responder"
UPGRADE_RESPONDER_IMAGE_TAG="master-head"

INFLUXDB_URL="http://influxdb.default.svc.cluster.local:8086"

APP_NAME="longhorn"

DEPLOYMENT_TIMEOUT_SEC=300
DEPLOYMENT_WAIT_INTERVAL_SEC=5

temp_dir=$(mktemp -d)
trap 'rm -rf "${temp_dir}"' EXIT # -f because packed Git files (.pack, .idx) are write protected.

cp -a ./* ${temp_dir}
cd ${temp_dir}

wait_for_deployment() {
  local deployment_name="$1"
  local start_time=$(date +%s)

  while true; do
    status=$(kubectl rollout status deployment/${deployment_name})
    if [[ ${status} == *"successfully rolled out"* ]]; then
      echo "Deployment ${deployment_name} is running."
      break
    fi

    elapsed_secs=$(($(date +%s) - ${start_time}))
    if [[ ${elapsed_secs} -ge ${DEPLOYMENT_TIMEOUT_SEC} ]]; then
      echo "Timed out waiting for deployment ${deployment_name} to be running."
      exit 1
    fi

    echo "Deployment ${deployment_name} is not running yet. Waiting..."
    sleep ${DEPLOYMENT_WAIT_INTERVAL_SEC}
  done
}

install_influxdb() {
  kubectl apply -f ./manifests/influxdb.yaml
  wait_for_deployment "influxdb"
}

install_grafana() {
  kubectl apply -f ./manifests/grafana.yaml
  wait_for_deployment "grafana"
}

install_upgrade_responder() {
  cat << EOF > ${UPGRADE_RESPONDER_VALUE_YAML}
applicationName: ${APP_NAME}
secret:
  name: upgrade-responder-secrets
  managed: true
  influxDBUrl: "${INFLUXDB_URL}"
  influxDBUser: "root"
  influxDBPassword: "root"
configMap:
  responseConfig: |-
    {
      "versions": [{
        "name": "v1.0.0",
        "releaseDate": "2020-05-18T12:30:00Z",
        "tags": ["latest"]
      }]
    }
  requestSchema: |-
    {
      "appVersionSchema": {
        "dataType": "string",
        "maxLen": 200
      },
      "extraTagInfoSchema": {
        "hostKernelRelease": { "dataType": "string", "maxLen": 200 },
        "hostOsDistro": { "dataType": "string", "maxLen": 200 },
        "kubernetesNodeProvider": { "dataType": "string", "maxLen": 200 },
        "kubernetesVersion": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAllowRecurringJobWhileVolumeDetached": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAllowVolumeCreationWithDegradedAvailability": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAutoCleanupSystemGeneratedSnapshot": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAutoDeletePodWhenVolumeDetachedUnexpectedly": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAutoSalvage": { "dataType": "string", "maxLen": 200 },
        "longhornSettingBackupCompressionMethod": { "dataType": "string", "maxLen": 200 },
        "longhornSettingBackupTarget": { "dataType": "string", "maxLen": 200 },
        "longhornSettingCrdApiVersion": { "dataType": "string", "maxLen": 200 },
        "longhornSettingCreateDefaultDiskLabeledNodes": { "dataType": "string", "maxLen": 200 },
        "longhornSettingDefaultDataLocality": { "dataType": "string", "maxLen": 200 },
        "longhornSettingDisableRevisionCounter": { "dataType": "string", "maxLen": 200 },
        "longhornSettingDisableSchedulingOnCordonedNode": { "dataType": "string", "maxLen": 200 },
        "longhornSettingFastReplicaRebuildEnabled": { "dataType": "string", "maxLen": 200 },
        "longhornSettingKubernetesClusterAutoscalerEnabled": { "dataType": "string", "maxLen": 200 },
        "longhornSettingNodeDownPodDeletionPolicy": { "dataType": "string", "maxLen": 200 },
        "longhornSettingNodeDrainPolicy": { "dataType": "string", "maxLen": 200 },
        "longhornSettingOfflineReplicaRebuilding": { "dataType": "string", "maxLen": 200 },
        "longhornSettingOrphanAutoDeletion": { "dataType": "string", "maxLen": 200 },
        "longhornSettingPriorityClass": { "dataType": "string", "maxLen": 200 },
        "longhornSettingRegistrySecret": { "dataType": "string", "maxLen": 200 },
        "longhornSettingRemoveSnapshotsDuringFilesystemTrim": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaAutoBalance": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaZoneSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaDiskSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingRestoreVolumeRecurringJobs": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSnapshotDataIntegrity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSnapshotDataIntegrityCronjob": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSnapshotDataIntegrityImmediateCheckAfterSnapshotCreation": { "dataType": "string", "maxLen": 200 },
        "longhornSettingStorageNetwork": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSystemManagedComponentsNodeSelector": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSystemManagedPodsImagePullPolicy": { "dataType": "string", "maxLen": 200 },
        "longhornSettingTaintToleration": { "dataType": "string", "maxLen": 200 },
        "longhornSettingV2DataEngine": { "dataType": "string", "maxLen": 200 }
      },
      "extraFieldInfoSchema": {
        "longhornInstanceManagerAverageCpuUsageMilliCores": { "dataType": "float" },
        "longhornInstanceManagerAverageMemoryUsageBytes": { "dataType": "float" },
        "longhornManagerAverageCpuUsageMilliCores": { "dataType": "float" },
        "longhornManagerAverageMemoryUsageBytes": { "dataType": "float" },
        "longhornNamespaceUid": { "dataType": "string", "maxLen": 200 },
        "longhornNodeCount": { "dataType": "float" },
        "longhornNodeDiskHDDCount": { "dataType": "float" },
        "longhornNodeDiskNVMeCount": { "dataType": "float" },
        "longhornNodeDiskSSDCount": { "dataType": "float" },
        "longhornSettingBackingImageCleanupWaitInterval": { "dataType": "float" },
        "longhornSettingBackingImageRecoveryWaitInterval": { "dataType": "float" },
        "longhornSettingBackupConcurrentLimit": { "dataType": "float" },
        "longhornSettingBackupstorePollInterval": { "dataType": "float" },
        "longhornSettingConcurrentAutomaticEngineUpgradePerNodeLimit": { "dataType": "float" },
        "longhornSettingConcurrentReplicaRebuildPerNodeLimit": { "dataType": "float" },
        "longhornSettingConcurrentVolumeBackupRestorePerNodeLimit": { "dataType": "float" },
        "longhornSettingDefaultReplicaCount": { "dataType": "float" },
        "longhornSettingEngineReplicaTimeout": { "dataType": "float" },
        "longhornSettingFailedBackupTtl": { "dataType": "float" },
        "longhornSettingGuaranteedInstanceManagerCpu": { "dataType": "float" },
        "longhornSettingRecurringFailedJobsHistoryLimit": { "dataType": "float" },
        "longhornSettingRecurringSuccessfulJobsHistoryLimit": { "dataType": "float" },
        "longhornSettingReplicaFileSyncHttpClientTimeout": { "dataType": "float" },
        "longhornSettingReplicaReplenishmentWaitInterval": { "dataType": "float" },
        "longhornSettingRestoreConcurrentLimit": { "dataType": "float" },
        "longhornSettingStorageMinimalAvailablePercentage": { "dataType": "float" },
        "longhornSettingStorageOverProvisioningPercentage": { "dataType": "float" },
        "longhornSettingStorageReservedPercentageForDefaultDisk": { "dataType": "float" },
        "longhornSettingSupportBundleFailedHistoryLimit": { "dataType": "float" },
        "longhornVolumeAccessModeRwoCount": { "dataType": "float" },
        "longhornVolumeAccessModeRwxCount": { "dataType": "float" },
        "longhornVolumeAccessModeUnknownCount": { "dataType": "float" },
        "longhornVolumeAverageActualSizeBytes": { "dataType": "float" },
        "longhornVolumeAverageNumberOfReplicas": { "dataType": "float" },
        "longhornVolumeAverageSizeBytes": { "dataType": "float" },
        "longhornVolumeAverageSnapshotCount": { "dataType": "float" },
        "longhornVolumeDataLocalityBestEffortCount": { "dataType": "float" },
        "longhornVolumeDataLocalityDisabledCount": { "dataType": "float" },
        "longhornVolumeDataLocalityStrictLocalCount": { "dataType": "float" },
        "longhornVolumeFrontendBlockdevCount": { "dataType": "float" },
        "longhornVolumeFrontendIscsiCount": { "dataType": "float" },
        "longhornVolumeOfflineReplicaRebuildingDisabledCount": { "dataType": "float" },
        "longhornVolumeOfflineReplicaRebuildingEnabledCount": { "dataType": "float" },
        "longhornVolumeReplicaAutoBalanceDisabledCount": { "dataType": "float" },
        "longhornVolumeReplicaSoftAntiAffinityFalseCount": { "dataType": "float" },
        "longhornVolumeReplicaZoneSoftAntiAffinityTrueCount": { "dataType": "float" },
        "longhornVolumeReplicaDiskSoftAntiAffinityTrueCount": { "dataType": "float" },
        "longhornVolumeRestoreVolumeRecurringJobFalseCount": { "dataType": "float" },
        "longhornVolumeSnapshotDataIntegrityDisabledCount": { "dataType": "float" },
        "longhornVolumeSnapshotDataIntegrityFastCheckCount": { "dataType": "float" },
        "longhornVolumeUnmapMarkSnapChainRemovedFalseCount": { "dataType": "float" }
      }
    }
image:
  repository: ${UPGRADE_RESPONDER_IMAGE_REPO}
  tag: ${UPGRADE_RESPONDER_IMAGE_TAG}
EOF

  git clone -b ${UPGRADE_RESPONDER_REPO_BRANCH} ${UPGRADE_RESPONDER_REPO}
  helm upgrade --install ${APP_NAME}-upgrade-responder upgrade-responder/chart -f ${UPGRADE_RESPONDER_VALUE_YAML}
  wait_for_deployment "${APP_NAME}-upgrade-responder"
}

output() {
  local upgrade_responder_service_info=$(kubectl get svc/${APP_NAME}-upgrade-responder --no-headers)
  local upgrade_responder_service_port=$(echo "${upgrade_responder_service_info}" | awk '{print $5}' | cut -d'/' -f1)
  echo # a blank line to separate the installation outputs for better readability.
  printf "[Upgrade Checker]\n"
  printf "%-10s: http://${APP_NAME}-upgrade-responder.default.svc.cluster.local:${upgrade_responder_service_port}/v1/checkupgrade\n\n" "URL"

  printf "[InfluxDB]\n"
  printf "%-10s: ${INFLUXDB_URL}\n" "URL"
  printf "%-10s: ${APP_NAME}_upgrade_responder\n" "Database"
  printf "%-10s: root\n" "Username"
  printf "%-10s: root\n\n" "Password"

  local public_ip=$(curl -s https://ifconfig.me/ip)
  local grafana_service_info=$(kubectl get svc/grafana --no-headers)
  local grafana_service_port=$(echo "${grafana_service_info}" | awk '{print $5}' | cut -d':' -f2 | cut -d'/' -f1)
  printf "[Grafana]\n"
  printf "%-10s: http://${public_ip}:${grafana_service_port}\n" "Dashboard"
  printf "%-10s: admin\n" "Username"
  printf "%-10s: admin\n" "Password"
}

install_influxdb
install_upgrade_responder
install_grafana
output
86 dev/upgrade-responder/manifests/grafana.yaml Normal file
@ -0,0 +1,86 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
          - 0
      containers:
        - name: grafana
          image: grafana/grafana:7.1.0
          imagePullPolicy: IfNotPresent
          env:
            - name: GF_INSTALL_PLUGINS
              value: "grafana-worldmap-panel"
          ports:
            - containerPort: 3000
              name: http-grafana
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /robots.txt
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 750Mi
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-pv
      volumes:
        - name: grafana-pv
          persistentVolumeClaim:
            claimName: grafana-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: http-grafana
  selector:
    app: grafana
  sessionAffinity: None
  type: LoadBalancer
90 dev/upgrade-responder/manifests/influxdb.yaml Normal file
@ -0,0 +1,90 @@
apiVersion: v1
kind: Secret
metadata:
  name: influxdb-creds
  namespace: default
type: Opaque
data:
  INFLUXDB_HOST: aW5mbHV4ZGI= # influxdb
  INFLUXDB_PASSWORD: cm9vdA== # root
  INFLUXDB_USERNAME: cm9vdA== # root
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb
  namespace: default
  labels:
    app: influxdb
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: influxdb
  name: influxdb
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: influxdb
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: influxdb
    spec:
      containers:
        - image: docker.io/influxdb:1.8.10
          imagePullPolicy: IfNotPresent
          name: influxdb
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          envFrom:
            - secretRef:
                name: influxdb-creds
          volumeMounts:
            - mountPath: /var/lib/influxdb
              name: var-lib-influxdb
      volumes:
        - name: var-lib-influxdb
          persistentVolumeClaim:
            claimName: influxdb
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: influxdb
  name: influxdb
  namespace: default
spec:
  ports:
    - port: 8086
      protocol: TCP
      targetPort: 8086
  selector:
    app: influxdb
  sessionAffinity: None
  type: ClusterIP
@ -15,7 +15,7 @@ https://github.com/longhorn/longhorn/issues/972
1. Previously Longhorn is using filesystem ID as keys to the map of disks on the node. But we found there is no guarantee that filesystem ID won't change after the node reboots for certain filesystems e.g. XFS.
1. We want to enable the ability to configure CRD directly, prepare for the CRD based API access in the future
1. We also need to make sure previously implemented safe guards are not impacted by this change:
    1. If a disk was accidentally umounted on the node, we should detect that and stop replica from scheduling into it.
    1. If a disk was accidentally unmounted on the node, we should detect that and stop replica from scheduling into it.
    1. We shouldn't allow user to add two disks pointed to the same filesystem

### Non-goals

@ -75,4 +75,4 @@ No special upgrade strategy is necessary. Once the user upgrades to the new vers

### Notes
- There is interest in allowing the user to decide on whether or not to retain the `Persistent Volume` (and possibly `Persistent Volume Claim`) for certain use cases such as restoring from a `Backup`. However, this would require changes to the way `go-rancher` generates the `Go` client that we use so that `Delete` requests against resources are able to take inputs.
- In the case that a `Volume` is provisioned from a `Storage Class` (and set to be `Deleted` once the `Persistent Volume Claim` utilizing that `Volume` has been deleted), the `Volume` should still be deleted properly regardless of how the deletion was initiated. If the `Volume` is deleted from the UI, the call that the `Volume Controller` makes to delete the `Persistent Volume` would only trigger one more deletion call from the `CSI` server to delete the `Volume`, which would return successfully and allow the `Persistent Volume` to be deleted and the `Volume` to be deleted as wekk. If the `Volume` is deleted because of the `Persistent Volume Claim`, the `CSI` server would be able to successfully make a `Volume` deletion call before deleting the `Persistent Volume`. The `Volume Controller` would have no additional resources to delete and be able to finish deletion of the `Volume`.
- In the case that a `Volume` is provisioned from a `Storage Class` (and set to be `Deleted` once the `Persistent Volume Claim` utilizing that `Volume` has been deleted), the `Volume` should still be deleted properly regardless of how the deletion was initiated. If the `Volume` is deleted from the UI, the call that the `Volume Controller` makes to delete the `Persistent Volume` would only trigger one more deletion call from the `CSI` server to delete the `Volume`, which would return successfully and allow the `Persistent Volume` to be deleted and the `Volume` to be deleted as well. If the `Volume` is deleted because of the `Persistent Volume Claim`, the `CSI` server would be able to successfully make a `Volume` deletion call before deleting the `Persistent Volume`. The `Volume Controller` would have no additional resources to delete and be able to finish deletion of the `Volume`.

@ -16,7 +16,7 @@ https://github.com/longhorn/longhorn/issues/298

## Proposal
1. Add `Eviction Requested` with `true` and `false` selection buttons for disks and nodes. This is for user to evict or cancel the eviction of the disks or the nodes.
2. Add new `evictionRequested` field to `Node.Spec`, `Node.Spec.disks` Spec and `Replica.Status`. These will help tracking the request from user and trigger replica controller to update `Replica.Status` and volume controler to do the eviction. And this will reconcile with `scheduledReplica` of selected disks on the nodes.
2. Add new `evictionRequested` field to `Node.Spec`, `Node.Spec.disks` Spec and `Replica.Status`. These will help tracking the request from user and trigger replica controller to update `Replica.Status` and volume controller to do the eviction. And this will reconcile with `scheduledReplica` of selected disks on the nodes.
3. Display `fail to evict` error message to `Dashboard` and any other eviction errors to the `Event log`.

### User Stories
@ -47,7 +47,7 @@ From an API perspective, the call to set `Eviction Requested` to `true` or `fals
### Implementation Overview

1. On `Longhorn UI` `Node` page, for nodes eviction, adding `Eviction Requested` `true` and `false` options in the `Edit Node` sub-selection, next to `Node Scheduling`. For disks eviction, adding `Eviction Requested` `true` and `false` options in `Edit node and disks` sub-selection under `Operation` column next to each disk `Scheduling` options. This is for user to evict or cancel the eviction of the disks or the nodes.
2. Add new `evictionRequested` field to `Node.Spec`, `Node.Spec.disks` Spec and `Replica.Status`. These will help tracking the request from user and trigger replica controller to update `Replica.Status` and volume controler to do the eviction. And this will reconcile with `scheduledReplica` of selected disks on the nodes.
2. Add new `evictionRequested` field to `Node.Spec`, `Node.Spec.disks` Spec and `Replica.Status`. These will help tracking the request from user and trigger replica controller to update `Replica.Status` and volume controller to do the eviction. And this will reconcile with `scheduledReplica` of selected disks on the nodes.
3. Add a informer in `Replica Controller` to get these information and update `evictionRequested` field in `Replica.Status`.
4. Once `Eviction Requested` has been set to `true` for disks or nodes, the `evictionRequested` fields for the disks and nodes will be set to `true` (default is `false`).
5. `Replica Controller` will update `evictionRequested` field in `Replica.Status` and `Volume Controller` to get these information from it's replicas.
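To make the proposed fields concrete, a minimal sketch of setting them on the Node CR follows; the exact API group/version and field placement are assumptions based on this proposal, not confirmed:

```yaml
# Hypothetical Node CR fragment illustrating the proposed evictionRequested fields.
apiVersion: longhorn.io/v1beta1
kind: Node
metadata:
  name: worker-1
  namespace: longhorn-system
spec:
  evictionRequested: true        # request eviction of every replica on this node
  disks:
    default-disk:
      evictionRequested: true    # or request eviction of a single disk only
```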
@ -61,7 +61,7 @@ From an API perspective, the call to set `Eviction Requested` to `true` or `fals
#### Manual Test Plan For Disks and Nodes Eviction
Positive Case:

For both `Replica Node Level Soft Anti-Affinity` has been enabled and disabled. Also the volume can be 'Attaced' or 'Detached'.
For both `Replica Node Level Soft Anti-Affinity` has been enabled and disabled. Also the volume can be 'Attached' or 'Detached'.
1. User can select one or more disks or nodes for eviction. Select `Eviction Requested` to `true` on the disabled disks or nodes, Longhorn should start rebuild replicas for the volumes which have replicas on the eviction disks or nodes, and after rebuild success, the replica number on the evicted disks or nodes should be 0. E.g. When there are 3 nodes in the cluster, and with `Replica Node Level Soft Anti-Affinity` is set to `false`, disable one node, and create a volume with replica count 2. And then evict one of them, the eviction should get stuck, then set `Replica Node Level Soft Anti-Affinity` to `true`, the eviction should go through.

Negative Cases:
@ -73,10 +73,10 @@ For `Replica Node Level Soft Anti-Affinity` is enabled, create 2 replicas on the

For `Replica Node Level Soft Anti-Affinity` is disabled, create 1 replica on a disk, and evict this disk or node, the replica should goto the other disk of node.

For node eviction, Longhorn will process the evition based on the disks for the node, this is like disk eviction. After eviction success, the replica number on the evicted node should be 0.
For node eviction, Longhorn will process the eviction based on the disks for the node, this is like disk eviction. After eviction success, the replica number on the evicted node should be 0.

#### Error Indication
During the eviction, user can click the `Replicas Number` on the `Node` page, and set which replicas are left from eviction, and click the `Replica Name` will redirect user to the `Volume` page to set if there is any error for this volume. If there is any error during the rebuild, Longhorn should display the error message from UI. The error could be `failed to schedule a replica` due to disk space or based on schedule policy, can not find a valid disk to put the relica.
During the eviction, user can click the `Replicas Number` on the `Node` page, and set which replicas are left from eviction, and click the `Replica Name` will redirect user to the `Volume` page to set if there is any error for this volume. If there is any error during the rebuild, Longhorn should display the error message from UI. The error could be `failed to schedule a replica` due to disk space or based on schedule policy, can not find a valid disk to put the replica.

### Upgrade strategy
No special upgrade strategy is necessary. Once the user upgrades to the new version of `Longhorn`, these new capabilities will be accessible from the `longhorn-ui` without any special work.

@ -61,12 +61,12 @@ Same as the Design
### Test plan
1. Setup a cluster of 3 nodes
1. Install Longhorn and set `Default Replica Count = 2` (because we will turn off one node)
1. Create a SetfullSet with 2 pods using the command:
1. Create a StatefulSet with 2 pods using the command:
   ```
   kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/examples/statefulset.yaml
   ```
1. Create a volume + pv + pvc named `vol1` and create a deployment of default ubuntu named `shell` with the usage of pvc `vol1` mounted under `/mnt/vol1`
1. Find the node which contains one pod of the StatefullSet/Deployment. Power off the node
1. Find the node which contains one pod of the StatefulSet/Deployment. Power off the node

#### StatefulSet
##### if `NodeDownPodDeletionPolicy ` is set to `do-nothing ` | `delete-deployment-pod`

@ -119,7 +119,7 @@ UI modification:
* On the right volume info panel, add a <div> to display `selectedVolume.dataLocality`
* On the right volume panel, in the Health row, add an icon for data locality status.
  Specifically, if `dataLocality=best-effort` but there is not a local replica then display a warning icon.
  Similar to the replica node redundancy wanring [here](https://github.com/longhorn/longhorn-ui/blob/0a52c1f0bef172d8ececdf4e1e953bfe78c86f29/src/routes/volume/detail/VolumeInfo.js#L47)
  Similar to the replica node redundancy warning [here](https://github.com/longhorn/longhorn-ui/blob/0a52c1f0bef172d8ececdf4e1e953bfe78c86f29/src/routes/volume/detail/VolumeInfo.js#L47)
* In the volume's actions dropdown, add a new action to update `dataLocality`
1. In Rancher UI, add a parameter `dataLocality` when create storage class using Longhorn provisioner.

@ -15,7 +15,7 @@ https://github.com/longhorn/longhorn/issues/508
1. By default 'DisableRevisionCounter' is 'false', but Longhorn provides an optional for user to disable it.
2. Once user set 'DisableRevisionCounter' to 'true' globally or individually, this will improve Longhorn data path performance.
3. And for 'DisableRevisionCounter' is 'true', Longhorn will keep the ability to find the most suitable replica to recover the volume when the engine is faulted(all the replicas are in 'ERR' state).
4. Also during Longhorn Engine starting, with head file information it's unlikly to find out out of synced replicas. So will skip the check.
4. Also during Longhorn Engine starting, with head file information it's unlikely to find out out of synced replicas. So will skip the check.

## Proposal

@ -41,7 +41,7 @@ Or from StorageClass yaml file, user can set 'parameters' 'revisionCounterDisabl

User can also set 'DisableRevisionCounter' for each individual volumes created by Longhorn UI this individual setting will over write the global setting.

Once the volume has 'DisableRevisionCounter' to 'true', there won't be revision counter file. And the 'Automatic salvage' is 'true', when the engine is fauled, the engine will pick the most suitable replica as 'Source of Truth' to recover the volume.
Once the volume has 'DisableRevisionCounter' to 'true', there won't be revision counter file. And the 'Automatic salvage' is 'true', when the engine is faulted, the engine will pick the most suitable replica as 'Source of Truth' to recover the volume.
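As a sketch of the StorageClass route mentioned in this hunk (the parameter name is taken from the proposal; the other fields are illustrative defaults, not part of it):

```yaml
# Hypothetical StorageClass disabling the revision counter for new volumes.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-no-revision-counter
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  revisionCounterDisabled: "true"
```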

### API changes

@ -63,12 +63,12 @@ And for the API compatibility issues, always check the 'EngineImage.Statue.cliAP

1. Add 'Volume.Spec.RevisionCounterDisabled', 'Replica.Spec.RevisionCounterDisabled' and 'Engine.Spec.RevisionCounterDisabled' to volume, replica and engine objects.
2. Once 'RevisionCounterDisabled' is 'true', volume controller will set 'Volume.Spec.RevisionCounterDisabled' to true, 'Replica.Spec.RevisionCounterDisabled' and 'Engine.Spec.RevisionCounterDisabled' will set to true. And during 'ReplicaProcessCreate' and 'EngineProcessCreate' , this will be passed to engine replica process and engine controller process to start a replica and controller without revision counter.
3. During 'ReplicaProcessCreate' and 'EngineProcessCreate', if 'Replica.Spec.RevisionCounterDisabled' or 'Engine.Spec.RevisionCounterDisabled' is true, it will pass extra parameter to engine replica to start replica without revision counter or to engine controller to start controller without revision counter support, otherwise keep it the same as current and engine replica will use the default value 'false' for this extra paramter. This is the same as the engine controller to set the 'salvageRequested' flag.
3. During 'ReplicaProcessCreate' and 'EngineProcessCreate', if 'Replica.Spec.RevisionCounterDisabled' or 'Engine.Spec.RevisionCounterDisabled' is true, it will pass extra parameter to engine replica to start replica without revision counter or to engine controller to start controller without revision counter support, otherwise keep it the same as current and engine replica will use the default value 'false' for this extra parameter. This is the same as the engine controller to set the 'salvageRequested' flag.
4. Add 'RevisionCounterDisabled' in 'ReplicaInfo', when engine controller start, it will get all replica information.
4. For engine controlloer starting cases:
4. For engine controller starting cases:
    - If revision counter is not disabled, stay with the current logic.
    - If revision counter is disabled, engine will not check the synchronization of the replicas.
    - If unexpected case (engine controller has revision counter diabled but any of the replica doesn't, or engine controller has revision counter enabled, but any of the replica doesn't), engine controller will log this as error and mark unmatched replicas to 'ERR'.
    - If unexpected case (engine controller has revision counter disabled but any of the replica doesn't, or engine controller has revision counter enabled, but any of the replica doesn't), engine controller will log this as error and mark unmatched replicas to 'ERR'.

#### Add New Logic for Salvage

@ -47,7 +47,7 @@ No API change is required.
3. replica eviction happens (volume.Status.Robustness is Healthy)
4. there is no potential reusable replica
5. there is a potential reusable replica but the replica replenishment wait interval is passed.
3. Reuse the failed replica by cleaning up `ReplicaSpec.HealthyAt` and `ReplicaSpec.FailedAt`. And `Replica.Spec.RebuildRetryCount` will be increasd by 1.
3. Reuse the failed replica by cleaning up `ReplicaSpec.HealthyAt` and `ReplicaSpec.FailedAt`. And `Replica.Spec.RebuildRetryCount` will be increased by 1.
4. Clean up the related record in `Replica.Spec.RebuildRetryCount` when the rebuilding replica becomes mode `RW`.
5. Guarantee the reused failed replica will be stopped before re-launching it.

@ -72,7 +72,7 @@ For example, there are many times users ask us for supporting and the problems w
If there is a CPU monitoring dashboard for instance managers, those problems can be quickly detected.

#### Story 2
User want to be notified about abnomal event such as disk space limit approaching.
User want to be notified about abnormal event such as disk space limit approaching.
We can expose metrics provide information about it and user can scrape the metrics and setup alert system.

### User Experience In Detail
@ -82,7 +82,7 @@ Users can use Prometheus or other monitoring systems to collect those metrics by
Then, user can display the collected data using tools such as Grafana.
User can also setup alert by using tools such as Prometheus Alertmanager.

Below are the desciptions of metrics which Longhorn exposes and how users can use them:
Below are the descriptions of metrics which Longhorn exposes and how users can use them:

1. longhorn_volume_capacity_bytes

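As one illustration of the alerting workflow mentioned above, a Prometheus rule could pair this metric with an actual-size metric; the sketch below assumes a companion `longhorn_volume_actual_size_bytes` metric and a `volume` label, neither of which is defined in this excerpt:

```yaml
# Hypothetical Prometheus rule alerting when a volume is over 90% full.
groups:
  - name: longhorn
    rules:
      - alert: LonghornVolumeAlmostFull
        expr: |
          longhorn_volume_actual_size_bytes / longhorn_volume_capacity_bytes > 0.9
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Longhorn volume {{ $labels.volume }} is over 90% full"
```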
@ -347,7 +347,7 @@ We add a new end point `/metrics` to exposes all longhorn Prometheus metrics.
|
||||
|
||||
### Implementation Overview
|
||||
We follow the [Prometheus best practice](https://prometheus.io/docs/instrumenting/writing_exporters/#deployment), each Longhorn manager reports information about the components it manages.
|
||||
Prometheus can use service discovery mechanisim to find all longhorn-manager pods in longhorn-backend service.
|
||||
Prometheus can use service discovery mechanism to find all longhorn-manager pods in longhorn-backend service.
|
||||
|
||||
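A minimal scrape configuration along those lines, assuming the `longhorn-system` namespace and the `longhorn-backend` service name used above, might look like:

```yaml
# Hypothetical Prometheus scrape_config discovering longhorn-manager pods
# through the longhorn-backend service endpoints.
scrape_configs:
  - job_name: longhorn
    kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
            - longhorn-system
    relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        regex: longhorn-backend
        action: keep
```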
We create a new collector for each type (volumeCollector, backupCollector, nodeCollector, etc..) and have a common baseCollector.
This structure is similar to the controller package: we have volumeController, nodeController, etc.. which have a common baseController.

@ -45,7 +45,7 @@ For part 2, we upgrade engine image for a volume when the following conditions a
### User Stories

Before this enhancement, users have to manually upgrade engine images for volume after upgrading Longhorn system to a newer version.
If there are thoudsands of volumes in the system, this is a significant manual work.
If there are thousands of volumes in the system, this is a significant manual work.

After this enhancement users either have to do nothing (in case live upgrade is possible)
or they only have to scale down/up the workload (in case there is a new default IM image)

@ -70,7 +70,7 @@ spec:
  url: https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```

Afterwards deploy the `cirros-rwx-blk.yaml` to create a live migratabale virtual machine.
Afterwards deploy the `cirros-rwx-blk.yaml` to create a live migratable virtual machine.
```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine

@ -155,14 +155,14 @@ With an example of cluster set for 2 zones and default of 2 replicas volume:
- The default value is `ignored`.

- In Volume Controller `syncVolume` -> `ReconcileEngineReplicaState` -> `replenishReplicas`, calculate and add number of replicas to be rebalanced to `replenishCount`.
  > The logic ignores all `soft-anti-affinity` settings. This will always try to achieve zone balance then node balance. And creating for replicas will leave for ReplicaScheduler to determine for the canidates.
  > The logic ignores all `soft-anti-affinity` settings. This will always try to achieve zone balance then node balance. And creating for replicas will leave for ReplicaScheduler to determine for the candidates.
  1. Skip volume replica rebalance when volume spec `replicaAutoBalance` is `disabled`.
  2. Skip if volume `Robustness` is not `healthy`.
  3. For `least-effort`, try to get the replica rebalance count.
     1. For `zone` duplicates, get the replenish number.
        1. List all the occupied node zones with volume replicas running.
           - The zone is balanced when this is equal to volume spec `NumberOfReplicas`.
        2. List all available and schedulabled nodes in non-occupied zones.
        2. List all available and schedulable nodes in non-occupied zones.
           - The zone is balanced when no available nodes are found.
        3. Get the number of replicas off-balanced:
           - number of replicas in volume spec - number of occupied node zones.

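For reference, a sketch of how the per-volume setting described in the hunk above might be expressed (the CR shape and field placement are assumptions based on this proposal):

```yaml
# Hypothetical Volume CR fragment; values per this proposal are
# ignored (default), disabled, least-effort, and best-effort.
apiVersion: longhorn.io/v1beta1
kind: Volume
metadata:
  name: vol-example
  namespace: longhorn-system
spec:
  numberOfReplicas: 2
  replicaAutoBalance: least-effort
```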
@ -354,7 +354,7 @@ Labels
[labels/2]: [b]
```
- `Name` field should be immutable.
- `Task` field should be imuutable.
- `Task` field should be immutable.

*And* user edit the fields in the form.

@ -976,11 +976,11 @@ Scenario: test recurring job concurrency
create volume `test-job-4`.
create volume `test-job-5`.

Then moniter the cron job pod log.
Then monitor the cron job pod log.
And should see 2 jobs created concurrently.

When update `snapshot1` recurring job with `concurrency` set to `3`.
Then moniter the cron job pod log.
Then monitor the cron job pod log.
And should see 3 jobs created concurrently.

### Upgrade strategy

@ -329,7 +329,7 @@ After the enhancement, users can directly specify the BackingImage during volume
|
||||
- Longhorn needs to verify the BackingImage if it's specified.
|
||||
- For restore/DR volumes, the BackingImage name stored in the backup volume will be used automatically if users do not specify the BackingImage name. Verify the checksum before using the BackingImage.
|
||||
- Snapshot backup:
|
||||
- BackingImage name and checksum will be recored into BackupVolume now.
|
||||
- BackingImage name and checksum will be record into BackupVolume now.
|
||||
- BackingImage creation:
|
||||
- Need to create both BackingImage CR and the BackingImageDataSource CR. Besides, a random ready disk will be picked up so that Longhorn can prepare the 1st file for the BackingImage immediately.
|
||||
- BackingImage get/list:
|
||||
@ -337,7 +337,7 @@ After the enhancement, users can directly specify the BackingImage during volume
|
||||
- BackingImageDataSource has not been created. Add retry would solve this case.
|
||||
- BackingImageDataSource is gone but BackingImage has not been cleaned up. Longhorn can ignore BackingImageDataSource when BackingImage deletion timestamp is set.
|
||||
- BackingImage disk cleanup:
|
||||
- This cannot break the HA besides affacting replicas. The main idea is similar to the cleanup in BackingImage Controller.
|
||||
- This cannot break the HA besides attaching replicas. The main idea is similar to the cleanup in BackingImage Controller.
|
||||
9. In CSI:
|
||||
- Check the backing image during the volume creation.
|
||||
- The missing BackingImage will be created when both BackingImage name and data source info are provided.
|
||||
@ -353,7 +353,7 @@ After the enhancement, users can directly specify the BackingImage during volume
|
||||
- The server will download the file immediately once the type is `download` and the server is up.
|
||||
- A cancelled context will be put the HTTP download request. When the server is stopped/failed while downloading is still in-progress, the context can help stop the download.
|
||||
- The service will wait at most 30s for the download to start. If the wait times out, the download is considered failed.
|
||||
- The download file is in `<Disk path in containter>/tmp/<BackingImage name>-<BackingImage UUID>`
|
||||
- The download file is in `<Disk path in container>/tmp/<BackingImage name>-<BackingImage UUID>`
|
||||
- Each time the image downloads a chunk of data, the progress will be updated. The first progress update means the download has started, and the state will move from `starting` to `in-progress` (see the sketch after this list).
|
||||
- The server is ready for handling the uploaded data once the type is `upload` and the server is up.
|
||||
- The query `size` is required for the API `upload`.
|
||||
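A rough sketch of the per-chunk progress update described above; the state names come from the text, while the struct and method are illustrative assumptions rather than the actual backing-image-manager code:

```go
package backingimage // hypothetical package, for illustration only

type state string

const (
	stateStarting   state = "starting"
	stateInProgress state = "in-progress"
)

// download is an illustrative stand-in for the server-side file tracker.
type download struct {
	state      state
	downloaded int64
	size       int64 // known from response headers, or from the `size` query for uploads
	progress   int   // percentage
}

// onChunk is called for each received chunk of data. The first call flips
// the state from "starting" to "in-progress", as described above.
func (d *download) onChunk(n int64) {
	if d.state == stateStarting {
		d.state = stateInProgress
	}
	d.downloaded += n
	if d.size > 0 {
		d.progress = int(d.downloaded * 100 / d.size)
	}
}
```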
@ -370,7 +370,7 @@ After the enhancement, users can directly specify the BackingImage during volume
|
||||
- Similar to `Fetch`, the image will try to reuse existing files.
|
||||
- The manager is responsible for managing all ports. The image will use the functions provided by the manager to get and then release ports.
|
||||
- API `Send`: Send a backing image file to a receiver. This should be similar to replica rebuilding.
|
||||
- API `Delete`: Unregister the image then delete the imge work directory. Make sure syncing or pulling will be cancelled if exists.
|
||||
- API `Delete`: Unregister the image then delete the image work directory. Make sure syncing or pulling will be cancelled if exists.
|
||||
- API `Get`/`List`: Collect the status of one backing image file/all backing image files.
|
||||
- API `Watch`: Establish a streaming connection to report BackingImage file info.
|
||||
- As I mentioned above, we will use BackingImage UUID to generate work directories for each BackingImage. The work directory is like:
|
||||
|
@ -190,7 +190,7 @@ Using those methods, the Sparse-tools know where is a data/hole interval to tran
|
||||
|
||||
### Longhorn CSI plugin
|
||||
* Advertise that Longhorn CSI driver has the ability to clone a volume, `csi.ControllerServiceCapability_RPC_CLONE_VOLUME`
|
||||
* When receiving a volume creat request, inspect `req.GetVolumeContentSource()` to see if it is from anther volume.
|
||||
* When receiving a volume create request, inspect `req.GetVolumeContentSource()` to see if it is from another volume.
|
||||
If so, create a new Longhorn volume with appropriate `DataSource` set so Longhorn volume controller can start cloning later on.
|
||||
|
||||
### Test plan
|
||||
|
@ -0,0 +1,290 @@
|
||||
# Title
|
||||
|
||||
Extend CSI snapshot to support Longhorn snapshot
|
||||
|
||||
## Summary
|
||||
|
||||
Before this feature, if the user uses [the CSI Snapshotter mechanism](https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html),
|
||||
they can only create Longhorn backups (out of cluster). We want to extend the CSI Snapshotter to support creating
|
||||
Longhorn snapshots (in-cluster) as well.
|
||||
|
||||
### Related Issues
|
||||
|
||||
https://github.com/longhorn/longhorn/issues/2534
|
||||
|
||||
## Motivation
|
||||
|
||||
### Goals
|
||||
|
||||
Extend the CSI Snapshotter to support:
|
||||
* Creating Longhorn snapshot
|
||||
* Deleting Longhorn snapshot
|
||||
* Creating a new PVC from a CSI snapshot that is associated with a Longhorn snapshot
|
||||
|
||||
### Non-goals
|
||||
|
||||
* Longhorn snapshot reverting is not a goal because the CSI snapshotter doesn't support reverting in place for now:
|
||||
https://github.com/container-storage-interface/spec/blob/master/spec.md#createsnapshot
|
||||
|
||||
## Proposal
|
||||
|
||||
### User Stories
|
||||
|
||||
Before this feature is implemented, users can only use CSI Snapshotter to create/restore Longhorn backups.
|
||||
This means that users must set up a backup target outside of the cluster. Uploading/downloading data from
|
||||
the backup target is a long/costly operation. Sometimes, users might just want to use CSI Snapshotter to take
|
||||
an in-cluster Longhorn snapshot and create a new volume from that snapshot. The Longhorn snapshot operation
|
||||
is cheap and faster than the backup operation and doesn't require setting up a backup target.
|
||||
|
||||
### User Experience In Detail
|
||||
|
||||
To use this feature, users need to do:
|
||||
1. Deploy the CSI snapshot CRDs, Controller as instructed at https://longhorn.io/docs/1.2.3/snapshots-and-backups/csi-snapshot-support/enable-csi-snapshot-support/
|
||||
1. Deploy a VolumeSnapshotClass with the parameter `type: longhorn-snapshot`. For example:
|
||||
```yaml
|
||||
kind: VolumeSnapshotClass
|
||||
apiVersion: snapshot.storage.k8s.io/v1beta1
|
||||
metadata:
|
||||
name: longhorn-snapshot
|
||||
driver: driver.longhorn.io
|
||||
deletionPolicy: Delete
|
||||
parameters:
|
||||
type: longhorn-snapshot
|
||||
```
|
||||
1. To create a new CSI snapshot associated with a Longhorn snapshot of the volume `test-vol`, users deploy the following VolumeSnapshot CR:
|
||||
```yaml
|
||||
apiVersion: snapshot.storage.k8s.io/v1beta1
|
||||
kind: VolumeSnapshot
|
||||
metadata:
|
||||
name: test-snapshot
|
||||
spec:
|
||||
volumeSnapshotClassName: longhorn-snapshot
|
||||
source:
|
||||
persistentVolumeClaimName: test-vol
|
||||
```
|
||||
A new Longhorn snapshot is created for the volume `test-vol`
|
||||
1. To create a new PVC from the CSI snapshot, users can deploy the following yaml:
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: test-restore-snapshot-pvc
|
||||
spec:
|
||||
storageClassName: longhorn
|
||||
dataSource:
|
||||
name: test-snapshot
|
||||
kind: VolumeSnapshot
|
||||
apiGroup: snapshot.storage.k8s.io
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 5Gi # should be the same as the size of `test-vol`
|
||||
```
|
||||
A new PVC will be created with the same content as in the VolumeSnapshot `test-snapshot`
|
||||
1. Deleting the VolumeSnapshot `test-snapshot` will lead to the deletion of the corresponding Longhorn snapshot of the volume `test-vol`
|
||||
|
||||
### API changes
|
||||
None
|
||||
|
||||
## Design
|
||||
|
||||
### Implementation Overview
|
||||
|
||||
We follow the specification in [the CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md#createsnapshot) when supporting the CSI snapshot.
|
||||
|
||||
We define a new parameter, `type`, in the VolumeSnapshotClass.
|
||||
The value of the parameter `type` can be `longhorn-snapshot` or `longhorn-backup`.
|
||||
When `type` is `longhorn-snapshot` it means that the CSI VolumeSnapshot created with this VolumeSnapshotClass is associated with a Longhorn snapshot.
|
||||
When `type` is `longhorn-backup` it means that the CSI VolumeSnapshot created with this VolumeSnapshotClass is associated with a Longhorn backup.
|
||||
|
||||
In [CreateSnapshot function](https://github.com/longhorn/longhorn-manager/blob/878cfb868c568396d6ebfa4ce096c5d95d9b31e3/csi/controller_server.go#L539), we get the
|
||||
value of parameter `type`. If it is `longhorn-backup`, we take a Longhorn backup as before. If it is `longhorn-snapshot` we do:
|
||||
* Get the name of the Longhorn volume
|
||||
* Check if the volume is in attached state.
|
||||
If it is not, return `codes.FailedPrecondition`.
|
||||
We cannot take a snapshot of a non-attached volume.
|
||||
* Check if a Longhorn snapshot with the same name as the requested CSI snapshot already exists.
|
||||
If yes, return OK without taking a new Longhorn snapshot.
|
||||
* Take a new Longhorn snapshot. Encode the snapshotId in the format `snap://volume-name/snapshot-name`.
|
||||
This snapshotId will be used in the later CSI CreateVolume and DeleteSnapshot calls (a sketch follows this list).
|
||||
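The encoding/decoding round-trip is straightforward; below is a minimal Go sketch assuming the `snap://` scheme above (the helper names are illustrative, not the actual longhorn-manager functions):

```go
package csi // hypothetical helpers, not the actual longhorn-manager functions

import (
	"fmt"
	"strings"
)

// encodeSnapshotID produces the "snap://volume-name/snapshot-name" format.
func encodeSnapshotID(volumeName, snapshotName string) string {
	return fmt.Sprintf("snap://%s/%s", volumeName, snapshotName)
}

// decodeSnapshotID recovers the volume and snapshot names from a snapshotId
// produced by encodeSnapshotID.
func decodeSnapshotID(snapshotID string) (volumeName, snapshotName string, err error) {
	trimmed := strings.TrimPrefix(snapshotID, "snap://")
	if trimmed == snapshotID {
		return "", "", fmt.Errorf("unrecognized snapshotId scheme: %v", snapshotID)
	}
	parts := strings.SplitN(trimmed, "/", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("invalid snapshotId: %v", snapshotID)
	}
	return parts[0], parts[1], nil
}
```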
|
||||
In [CreateVolume function](https://github.com/longhorn/longhorn-manager/blob/878cfb868c568396d6ebfa4ce096c5d95d9b31e3/csi/controller_server.go#L63):
|
||||
* If the VolumeContentSource is a `VolumeContentSource_Snapshot` type, decode the snapshotId in the format from the above step.
|
||||
* Create a new volume with the `dataSource` set to `snap://volume-name/snapshot-name`. This will trigger Longhorn to clone the content of the snapshot to the new volume.
|
||||
Note that if the source volume is not attached, Longhorn cannot verify the existence of the snapshot inside the Longhorn volume.
|
||||
This means that [the API will return an error](https://github.com/longhorn/longhorn-manager/blob/878cfb868c568396d6ebfa4ce096c5d95d9b31e3/manager/volume.go#L347-L352) and the new PVC cannot be provisioned.
|
||||
|
||||
In [DeleteSnapshot function](https://github.com/longhorn/longhorn-manager/blob/878cfb868c568396d6ebfa4ce096c5d95d9b31e3/csi/controller_server.go#L675):
|
||||
* Decode the snapshotId in the format from the above step.
|
||||
If the type is `longhorn-backup` we delete the backup as before.
|
||||
If the type is `longhorn-snapshot`, we delete the corresponding Longhorn snapshot of the source volume.
|
||||
If the source volume or the snapshot no longer exists, we return OK as specified in [the CSI spec](https://github.com/container-storage-interface/spec/blob/master/spec.md#deletesnapshot)
|
||||
|
||||
### Test plan
|
||||
|
||||
Integration test plan.
|
||||
|
||||
1. Deploy the CSI snapshot CRDs, Controller as instructed at https://longhorn.io/docs/1.2.3/snapshots-and-backups/csi-snapshot-support/enable-csi-snapshot-support/
|
||||
1. Deploy 4 VolumeSnapshotClass:
|
||||
```yaml
|
||||
kind: VolumeSnapshotClass
|
||||
apiVersion: snapshot.storage.k8s.io/v1beta1
|
||||
metadata:
|
||||
name: longhorn-backup-1
|
||||
driver: driver.longhorn.io
|
||||
deletionPolicy: Delete
|
||||
```
|
||||
```yaml
|
||||
kind: VolumeSnapshotClass
|
||||
apiVersion: snapshot.storage.k8s.io/v1beta1
|
||||
metadata:
|
||||
name: longhorn-backup-2
|
||||
driver: driver.longhorn.io
|
||||
deletionPolicy: Delete
|
||||
parameters:
|
||||
type: longhorn-backup
|
||||
```
|
||||
```yaml
|
||||
kind: VolumeSnapshotClass
|
||||
apiVersion: snapshot.storage.k8s.io/v1beta1
|
||||
metadata:
|
||||
name: longhorn-snapshot
|
||||
driver: driver.longhorn.io
|
||||
deletionPolicy: Delete
|
||||
parameters:
|
||||
type: longhorn-snapshot
|
||||
```
|
||||
```yaml
|
||||
kind: VolumeSnapshotClass
|
||||
apiVersion: snapshot.storage.k8s.io/v1beta1
|
||||
metadata:
|
||||
name: invalid-class
|
||||
driver: driver.longhorn.io
|
||||
deletionPolicy: Delete
|
||||
parameters:
|
||||
type: invalid
|
||||
```
|
||||
1. Create Longhorn volume `test-vol` of 5GB. Create PV/PVC for the Longhorn volume.
|
||||
1. Create a workload that uses the volume. Write some data to the volume.
|
||||
Make sure data persists to the volume by running `sync`
|
||||
1. Set up a backup target for Longhorn
|
||||
|
||||
#### Scenario 1: CreateSnapshot
|
||||
* `type` is `longhorn-backup` or `""`
|
||||
|
||||
* Create a VolumeSnapshot with the following yaml
|
||||
```yaml
|
||||
apiVersion: snapshot.storage.k8s.io/v1beta1
|
||||
kind: VolumeSnapshot
|
||||
metadata:
|
||||
name: test-snapshot-longhorn-backup
|
||||
spec:
|
||||
volumeSnapshotClassName: longhorn-backup-1
|
||||
source:
|
||||
persistentVolumeClaimName: test-vol
|
||||
```
|
||||
* Verify that a backup is created.
|
||||
* Delete the `test-snapshot-longhorn-backup`
|
||||
* Verify that the backup is deleted
|
||||
* Create the `test-snapshot-longhorn-backup` VolumeSnapshot with `volumeSnapshotClassName: longhorn-backup-2`
|
||||
* Verify that a backup is created.
|
||||
* `type` is `longhorn-snapshot`
|
||||
* volume is in detached state.
|
||||
* Scale down the workload of `test-vol` to detach the volume.
|
||||
* Create `test-snapshot-longhorn-snapshot` VolumeSnapshot with `volumeSnapshotClassName: longhorn-snapshot`.
|
||||
* Verify the error `volume ... invalid state ... for taking snapshot` in the Longhorn CSI plugin.
|
||||
* volume is in attached state.
|
||||
* Scale up the workload to attach `test-vol`
|
||||
* Verify that a Longhorn snapshot is created for the `test-vol`.
|
||||
* invalid type
|
||||
* Create `test-snapshot-invalid` VolumeSnapshot with `volumeSnapshotClassName: invalid-class`.
|
||||
* Verify the error `invalid snapshot type: %v. Must be %v or %v or` in the Longhorn CSI plugin.
|
||||
* Delete `test-snapshot-invalid` VolumeSnapshot.
|
||||
|
||||
#### Scenario 2: Create a new volume from a CSI snapshot
|
||||
* From `longhorn-backup` type
|
||||
* Create a new PVC with the following yaml:
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: test-restore-pvc
|
||||
spec:
|
||||
storageClassName: longhorn
|
||||
dataSource:
|
||||
name: test-snapshot-longhorn-backup
|
||||
kind: VolumeSnapshot
|
||||
apiGroup: snapshot.storage.k8s.io
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 5Gi
|
||||
```
|
||||
* Attach the PVC `test-restore-pvc` and verify the data
|
||||
* Delete the PVC
|
||||
* From `longhorn-snapshot` type
|
||||
* Source volume is attached && Longhorn snapshot exist
|
||||
* Create a PVC with the following yaml:
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolumeClaim
|
||||
metadata:
|
||||
name: test-restore-pvc
|
||||
spec:
|
||||
storageClassName: longhorn
|
||||
dataSource:
|
||||
name: test-snapshot-longhorn-snapshot
|
||||
kind: VolumeSnapshot
|
||||
apiGroup: snapshot.storage.k8s.io
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
resources:
|
||||
requests:
|
||||
storage: 5Gi
|
||||
```
|
||||
* Attach the PVC `test-restore-pvc` and verify the data
|
||||
* Delete the PVC
|
||||
* Source volume is detached
|
||||
* Scale down the workload to detach the `test-vol`
|
||||
* Create the same PVC `test-restore-pvc` as in the `Source volume is attached && Longhorn snapshot exist` section
|
||||
* Verify that PVC provisioning failed because the source volume is detached so Longhorn cannot verify the existence of the Longhorn snapshot in the source volume.
|
||||
* Scale up the workload to attach `test-vol`
|
||||
* Wait for the PVC to finish provisioning and become bound
|
||||
* Attach the PVC `test-restore-pvc` and verify the data
|
||||
* Delete the PVC
|
||||
* Source volume is attached && Longhorn snapshot doesn’t exist
|
||||
* Find the VolumeSnapshotContent of the VolumeSnapshot `test-snapshot-longhorn-snapshot`.
|
||||
Find the Longhorn snapshot name inside the field `VolumeSnapshotContent.snapshotHandle`.
|
||||
Go to Longhorn UI. Delete the Longhorn snapshot.
|
||||
* Repeat steps in the section `Longhorn snapshot exist` above.
|
||||
PVC should be stuck in provisioning because Longhorn snapshot of the source volume doesn't exist.
|
||||
* Delete the PVC `test-restore-pvc`
|
||||
|
||||
#### Scenario 3: Delete CSI snapshot
|
||||
* `longhorn-backup` type
|
||||
* Done in the above step
|
||||
* `longhorn-snapshot` type
|
||||
* volume is attached && snapshot doesn’t exist
|
||||
* Delete the VolumeSnapshot `test-snapshot-longhorn-snapshot` and verify that the VolumeSnapshot is deleted.
|
||||
* volume is attached && snapshot exist
|
||||
* Recreate the VolumeSnapshot `test-snapshot-longhorn-snapshot`
|
||||
* Verify the creation of Longhorn snapshot with the name in the field `VolumeSnapshotContent.snapshotHandle`
|
||||
* Delete the VolumeSnapshot `test-snapshot-longhorn-snapshot`
|
||||
* Verify that Longhorn snapshot is removed or marked as removed
|
||||
* Verify that the VolumeSnapshot `test-snapshot-longhorn-snapshot` is deleted.
|
||||
* volume is detached
|
||||
* Recreate the VolumeSnapshot `test-snapshot-longhorn-snapshot`
|
||||
* Scale down the workload to detach `test-vol`
|
||||
* Delete the VolumeSnapshot `test-snapshot-longhorn-snapshot`
|
||||
* Verify that VolumeSnapshot `test-snapshot-longhorn-snapshot` is stuck in deleting
|
||||
|
||||
|
||||
### Upgrade strategy
|
||||
|
||||
No upgrade strategy needed
|
||||
|
||||
## Note [optional]
|
||||
|
||||
We need to update the docs and examples to reflect the new parameter in the VolumeSnapshotClass, `type`.
|
91
enhancements/20220317-snapshot-prune.md
Normal file
@ -0,0 +1,91 @@
|
||||
# Snapshot Prune
|
||||
|
||||
## Summary
|
||||
Snapshot prune is a new snapshot-purge-related operation that helps **reclaim some space** from a snapshot file that is already marked as _Removed_ but **cannot be completely deleted**. This kind of snapshot is typically the one that directly stands behind the volume head.
|
||||
|
||||
### Related Issues
|
||||
https://github.com/longhorn/longhorn/issues/3613
|
||||
|
||||
## Motivation
|
||||
### Goals
|
||||
Snapshots could store historical data for a volume. This means extra space will be required, and the volume's actual size can be much greater than the spec size.
|
||||
To prevent existing volumes from using too much space, users can clean up snapshots by marking them as _Removed_ and then waiting for Longhorn to purge them.
|
||||
But there is one issue: By design, the snapshot that directly stands behind the volume head, also known as the latest snapshot, cannot be purged by Longhorn after being marked as _Removed_. The space consumed by it cannot be released, no matter whether users care about the historical data or not.
|
||||
Hence, Longhorn should do something special to reclaim space "wasted" by this kind of snapshot.
|
||||
|
||||
### Non-goals
|
||||
Volume trim/shrink: https://github.com/longhorn/longhorn/issues/836
|
||||
|
||||
## Proposal
|
||||
1. Deleting a snapshot consists of 2 steps: marking the snapshot as _Removed_ and then waiting for Longhorn to purge it. The snapshot purge itself consists of 3 steps: copy data from the newer snapshot to the old snapshot, replace the newer snapshot with the updated old snapshot, and remove the newer snapshot.
|
||||
This operation is named "coalesce" or "fold" in Longhorn. As mentioned before, it cannot be applied to the latest snapshot file since the file newer than it is actually the volume head, which cannot be modified by anyone except users/workloads.
|
||||
In other words, we cannot use this operation to handle the latest snapshot.
|
||||
```
+--------------+     +--------------+     +--------------+
|  Snapshot A  | --- |  Snapshot B  | --- |  Volume head |
+--------------+     +--------------+     +--------------+
        ^
        |
Marked Snapshot A (the old snapshot) as _Removed_

+--------------+     +--------------+     +--------------+
|  Snapshot A  | --- |  Snapshot B  | --- |  Volume head |
+--------------+     +--------------+     +--------------+
        ^                    |
        +--------------------+
Copy data from the Snapshot B (the newer snapshot) to Snapshot A

+---------------------------------+     +--------------+
| Rename snapshot A to snapshot B | --- |  Volume head |
+---------------------------------+     +--------------+
                 ^
                 |
Delete Snapshot B then rename snapshot A to Snapshot B
```
|
||||
2. Longhorn needs to somehow reclaim the space from the latest snapshot without directly deleting the file itself or modifying the volume head.
|
||||
Notice that Longhorn can still read the volume head as well as modify the snapshot once the snapshot itself is marked as _Removed_. This means we can detect which part of the latest snapshot is overwritten by the volume head. Then punching holes in the overlapping parts of the snapshot would reclaim the space.
|
||||
Here, we call this new operation "prune".
|
||||
```
+--------------+     +---------------+
|  Snapshot A  | --- |  Volume head  |
+--------------+     +---------------+
        ^                    |
        +--------------------+

Snapshot A is the latest snapshot of the volume.
Longhorn will scan the volume head. For each data chunk of the volume head,
Longhorn will punch a hole at the same position for snapshot A.
```
|
||||
3. Punching holes means modifying the data of the snapshot. Therefore, once the snapshot is marked as _Removed_ and the cleanup happens, Longhorn should not allow users to revert to the snapshot anymore. This is the prerequisite of this enhancement.
|
||||
This snapshot revert issue is handled in https://github.com/longhorn/longhorn/issues/3748.
|
||||
|
||||
### User Stories
|
||||
#### Cleanup the data of the latest snapshot
|
||||
Before the enhancement, users need to create a new snapshot and then remove the target snapshot so that Longhorn will coalesce the target snapshot with the newly created one. But the issue is that the volume head will be filled up again later, and users may end up redoing the operation repeatedly to reclaim the space occupied by the historical data of the snapshot.
|
||||
|
||||
After the enhancement, as long as there is no newer snapshot created, users can directly reclaim the space from the latest snapshot by simply deleting the snapshot via UI.
|
||||
|
||||
### User Experience In Detail
|
||||
Assume that there are heavy writing tasks for a volume and the only snapshot is filled up with the historical data (this snapshot may be created by rebuilding or backup). The actual size of the volume is typically twice the spec size.
|
||||
Now users just need to remove the only/latest snapshot via the UI, and Longhorn will reclaim almost all the space used by the snapshot, which is the spec size here.
|
||||
Then as long as users don't create a new snapshot, the actual size of this volume is the space used by the volume head only, which is up to the spec size in total.
|
||||
|
||||
### API Changes
|
||||
N/A
|
||||
|
||||
## Design
|
||||
### Implementation Overview
|
||||
#### longhorn-engine:
|
||||
When the snapshot purge is triggered, replicas will identify whether the snapshot being removed is the latest snapshot by checking whether one of its children is the volume head. If YES, they will start the snapshot pruning operation:
|
||||
1. Before pruning, replicas will make sure the apparent size of the snapshot is the same as that of the volume head. If not, the snapshot will be truncated/expanded first.
|
||||
2. During pruning, replicas need to iterate over the volume head fiemap. Whenever a data chunk is found in the volume head file, they will blindly punch a hole at the same position in the snapshot file (see the sketch below).
|
||||
If there are multiple snapshots including the latest one being removed simultaneously, we need to make sure the pruning is done only after all the other snapshots have done coalescing and deletion.
|
||||
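Below is a simplified sketch of the pruning step, assuming the volume head's data intervals have already been collected from its fiemap and simplified into an extent list; it is illustrative, not the actual longhorn-engine implementation:

```go
package replica // illustrative, not the actual longhorn-engine code

import (
	"os"

	"golang.org/x/sys/unix"
)

// Extent is a stand-in for one data interval reported by the volume head's fiemap.
type Extent struct {
	Offset int64
	Length int64
}

// pruneSnapshot punches a hole in the (removed) latest snapshot file for
// every data extent of the volume head, reclaiming the overlapping space
// without touching the volume head itself.
func pruneSnapshot(snapshotPath string, headApparentSize int64, headExtents []Extent) error {
	f, err := os.OpenFile(snapshotPath, os.O_RDWR, 0)
	if err != nil {
		return err
	}
	defer f.Close()

	// Step 1: align the snapshot's apparent size with the volume head's.
	if err := f.Truncate(headApparentSize); err != nil {
		return err
	}

	// Step 2: deallocate every range that the volume head overwrites.
	// KEEP_SIZE keeps the apparent file size unchanged.
	for _, e := range headExtents {
		if err := unix.Fallocate(int(f.Fd()),
			unix.FALLOC_FL_PUNCH_HOLE|unix.FALLOC_FL_KEEP_SIZE,
			e.Offset, e.Length); err != nil {
			return err
		}
	}
	return nil
}
```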
|
||||
#### longhorn-ui:
|
||||
Allow users to remove the snapshots that are already marked as Removed. And in this case, the frontend just needs to send a `SnapshotPurge` call to the backend.
|
||||
|
||||
### Test Plan
|
||||
#### Integration tests
|
||||
Test the snapshot prune operation together with snapshot coalesce, snapshot revert, and volume expansion.
|
||||
|
||||
### Upgrade strategy
|
||||
N/A
|
||||
|
172
enhancements/20220324-orphaned-data-cleanup.md
Normal file
@ -0,0 +1,172 @@
|
||||
# Orphaned Replica Directory Cleanup
|
||||
|
||||
## Summary
|
||||
|
||||
Orphaned replica directory cleanup identifies unmanaged replicas on the disks and provides a list of the orphaned replica directories on each node. Longhorn will not delete the replicas automatically, preventing deletions by mistake. Instead, it allows the user to select and trigger the deletion of the orphaned replica directories manually, or deletes them automatically when the auto-deletion setting is enabled.
|
||||
|
||||
### Related Issues
|
||||
|
||||
[https://github.com/longhorn/longhorn/issues/685](https://github.com/longhorn/longhorn/issues/685)
|
||||
|
||||
## Motivation
|
||||
|
||||
### Goals
|
||||
|
||||
- Identify the orphaned replica directories
|
||||
- The scanning process should not block the reconciliation of the controller
|
||||
- Provide users a way to select and trigger the deletion of the orphaned replica directories
|
||||
- Support the global auto-deletion of orphaned replica directories
|
||||
### Non-goals
|
||||
|
||||
- Clean up unknown files or directories in disk paths
|
||||
- Support the per-node auto-deletion of orphaned replica directories
|
||||
- Support the auto-deletion of orphaned replica directories that exceed the TTL
|
||||
|
||||
## Proposal
|
||||
|
||||
1. Introduce a new CRD `orphan` and a controller that represents and tracks the orphaned replica directories. The controller deletes the physical data and the resource when it receives a deletion request.
|
||||
|
||||
|
||||
2. A monitor on each node controller is created to periodically collect the on-disk replica directories, compare them with the scheduled replicas, and then find the orphaned replica directories.
|
||||
|
||||
The reconciliation loop of the node controller gets the latest disk status and orphaned replica directories from the monitor and updates the state of the node. Additionally, the `orphan` resources associated with the orphaned replica directories are created.
|
||||
|
||||
```
       queue            ┌───────────────┐             ┌──────────────────────┐
  ┌┐ ┌┐ ┌┐              │               │             │                      │
... ││ ││ ││  ──────►   │   syncNode()  ├────────────►│      reconcile()     │
  └┘ └┘ └┘              │               │             │                      │
                        └───────────────┘             └───────────┬──────────┘
                                                                  │
                                                 syncWithMonitor  │
                                                                  │
                                                      ┌───────────▼──────────┐
                                                      │                      │
                                                      │   per-node monitor   │
                                                      │                      │
                                                      │  collect information │
                                                      │                      │
                                                      └──────────────────────┘
```
|
||||
|
||||
### User Stories
|
||||
When a user introduces a disk into a Longhorn node, it may contain replica directories that are not tracked by the Longhorn system. The untracked replica directories may belong to other Longhorn clusters. Or, the replica CRs associated with the replica directories are removed after the node or the disk is down. When the node or the disk comes back, the corresponding replica data directories are no longer tracked by the Longhorn system. These replica data directories are called orphaned.
|
||||
|
||||
Longhorn's disk capacity is taken up by the orphaned replica directories. Users need to compare the on-disk replica directories with the replicas tracked by the Longhorn system on each node and then manually delete the orphaned replica directories. The process is tedious and time-consuming for users.
|
||||
|
||||
After the enhancement, Longhorn automatically finds the orphaned replica directories on Longhorn nodes. Users can visualize and manage the orphaned replica directories via the Longhorn GUI or command line tools. Additionally, Longhorn can delete the orphaned replica directories automatically if users enable the global auto-deletion option.
|
||||
|
||||
### User Experience In Detail
|
||||
|
||||
- Via Longhorn GUI
|
||||
- Users can check Node and Disk status then see if Longhorn already identifies orphaned replicas.
|
||||
- Users can choose the items in the orphaned replica directory list and then clean them up.
|
||||
- Users can enable the global auto-deletion on the setting page. By default, the auto-deletion is disabled.
|
||||
|
||||
- Via `kubectl`
|
||||
- Users can list the orphaned replica directories by `kubectl -n longhorn-system get orphans`.
|
||||
- Users can delete the orphaned replica directories by `kubectl -n longhorn-system delete orphan <name>`.
|
||||
- Users can enable the global auto-deletion by `kubectl -n longhorn-system edit settings orphan-auto-deletion`
|
||||
|
||||
## Design
|
||||
|
||||
### Implementation Overview
|
||||
**Settings**
|
||||
- Add setting `orphan-auto-deletion`. Default value is `false`.
|
||||
|
||||
**Node controller**
|
||||
- Start the monitor during initialization.
|
||||
- Sync with the monitor in each reconcile loop.
|
||||
- Update the node/disk status.
|
||||
- Create `orphan` CRs based on the information collected by the monitor.
|
||||
- Delete the `orphan` CRs if the node/disk is requested to be evicted.
|
||||
- Delete the `orphan` CRs if the corresponding directories disappear.
|
||||
- Delete the `orphan` CRs if the auto-deletion setting is enabled.
|
||||
|
||||
**Node monitor**
|
||||
- Struct
|
||||
```go
|
||||
type NodeMonitor struct {
|
||||
logger logrus.FieldLogger
|
||||
|
||||
ds *datastore.DataStore
|
||||
|
||||
node longhorn.Node
|
||||
lock sync.RWMutex
|
||||
|
||||
onDiskReplicaDirectories map[string]map[string]string
|
||||
|
||||
syncCallback func(key string)
|
||||
|
||||
ctx context.Context
|
||||
quit context.CancelFunc
|
||||
}
|
||||
```
|
||||
- Periodically detect and verify disk
|
||||
|
||||
- Run `stat`
|
||||
- Check disk FSID
|
||||
- Check disk UUID in the metafile
|
||||
- Periodically check and identify orphan directories
|
||||
|
||||
- List on-disk directories in `${disk_path}/replicas` and compare them with the last record stored in `monitor.onDiskReplicaDirectories`.
|
||||
- If the two lists are different, iterate all directories in `${disk_path}/replicas` and then get the list of the orphaned replica directories.
|
||||
|
||||
A valid replica directory has the following properties:
|
||||
- The directory name format is `<disk path>/replicas/<replica name>-<random string>`
|
||||
- `<disk path>/replicas/<replica name>-<random string>/volume.meta` is parsable and follows the `volume.meta` format.
|
||||
|
||||
- Compare the list of the on-disk replica directories with `node.status.diskStatus.scheduledReplica` to find the orphaned replica directories. Store the list in `monitor.node.status.diskStatus.orphanedReplicaDirectoryNames` (a sketch follows).
|
||||
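A condensed sketch of this detection logic; it assumes `volume.meta` is JSON and that the scheduled-replica set is keyed by directory name, and the helper is illustrative rather than the actual node-monitor code:

```go
package monitor // illustrative, not the actual node-monitor code

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// findOrphanedReplicaDirectories lists every directory under
// ${diskPath}/replicas that looks like real replica data (its volume.meta
// parses as JSON) but is not in the scheduled-replica set.
func findOrphanedReplicaDirectories(diskPath string, scheduledReplicaDirs map[string]struct{}) ([]string, error) {
	entries, err := os.ReadDir(filepath.Join(diskPath, "replicas"))
	if err != nil {
		return nil, err
	}

	var orphans []string
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		if _, scheduled := scheduledReplicaDirs[e.Name()]; scheduled {
			continue
		}
		// Directories without a parsable volume.meta are skipped, since
		// cleaning up unknown files/directories in disk paths is a non-goal.
		data, err := os.ReadFile(filepath.Join(diskPath, "replicas", e.Name(), "volume.meta"))
		if err != nil {
			continue
		}
		var meta map[string]interface{}
		if json.Unmarshal(data, &meta) != nil {
			continue
		}
		orphans = append(orphans, e.Name())
	}
	return orphans, nil
}
```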
|
||||
**Orphan controller**
|
||||
- Struct:
|
||||
```go
|
||||
// OrphanSpec defines the desired state of the Longhorn orphaned data
|
||||
type OrphanSpec struct {
|
||||
// The node ID on which the controller is responsible to reconcile this orphan CR.
|
||||
// +optional
|
||||
NodeID string `json:"nodeID"`
|
||||
// The type of the orphaned data.
|
||||
// Can be "replica".
|
||||
// +optional
|
||||
Type OrphanType `json:"type"`
|
||||
|
||||
// The parameters of the orphaned data
|
||||
// +optional
|
||||
// +nullable
|
||||
Parameters map[string]string `json:"parameters"`
|
||||
}
|
||||
|
||||
// OrphanStatus defines the observed state of the Longhorn orphaned data
|
||||
type OrphanStatus struct {
|
||||
// +optional
|
||||
OwnerID string `json:"ownerID"`
|
||||
// +optional
|
||||
// +nullable
|
||||
Conditions []Condition `json:"conditions"`
|
||||
}
|
||||
```
|
||||
- Upon receiving a deletion request, delete the on-disk orphaned replica directory and the `orphan` resource.
|
||||
|
||||
- If the auto-deletion is enabled, the node controller will issue the orphan deletion requests.
|
||||
|
||||
**longhorn-ui**
|
||||
|
||||
- Allow users to list the orphans on the node page by sending `OrphanList` call to the backend.
|
||||
- Allow users to select the orphans to be deleted. The frontend needs to send `OrphanDelete` call to the backend.
|
||||
|
||||
|
||||
### Test Plan
|
||||
|
||||
**Integration tests**
|
||||
|
||||
- `orphan` CRs will be created correctly in the disk path. And they can be cleaned up with the directories.
|
||||
- `orphan` CRs will be created correctly when there are multiple kinds of files/directories in the disk path. And they can be cleaned up with the directories.
|
||||
- `orphan` CRs will be removed when the replica directories disappear.
|
||||
- `orphan` CRs will be removed when the node/disk is evicted or down. The associated orphaned replica directories should not be cleaned up.
|
||||
- Auto-deletion setting.
|
||||
|
||||
|
||||
## Note [optional]
|
177
enhancements/20220408-support-kubernetes-ca.md
Normal file
@ -0,0 +1,177 @@
|
||||
# Support Kubernetes Cluster Autoscaler
|
||||
|
||||
Longhorn should support Kubernetes Cluster Autoscaler.
|
||||
|
||||
## Summary
|
||||
|
||||
Currently, Longhorn pods are [blocking CA from removing a node](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node). This proposal introduces a new global setting `kubernetes-cluster-autoscaler-enabled` that will annotate Longhorn components and also add logic for instance-manager PodDisruptionBudget management.
|
||||
|
||||
### Related Issues
|
||||
|
||||
https://github.com/longhorn/longhorn/issues/2203
|
||||
|
||||
## Motivation
|
||||
|
||||
### Goals
|
||||
|
||||
- Longhorn should block CA from scaling down if a node meets ANY of these conditions:
|
||||
- Any volume attached
|
||||
- Contains a backing image manager pod
|
||||
- Contains a share manager pod
|
||||
- Longhorn should not block CA from scaling down if a node meets ALL of these conditions:
|
||||
- All volumes detached and there is another schedulable node with a volume replica and a replica IM PDB.
|
||||
- Does not contain a backing image manager pod
|
||||
- Does not contain a share manager pod
|
||||
|
||||
### Non-goals [optional]
|
||||
|
||||
- CA setup.
|
||||
- CA blocked by kube-system components.
|
||||
- CA blocked by backing image manager pod. (TODO)
|
||||
- CA blocked by share manager pod. (TODO)
|
||||
|
||||
## Proposal
|
||||
Setting `kubernetes-cluster-autoscaler-enabled` adds the `cluster-autoscaler.kubernetes.io/safe-to-evict` annotation to Longhorn pods that are not backed by a controller or that have local storage volume mounts. To avoid data loss, Longhorn does not annotate the backing image manager and share manager pods.
|
||||
|
||||
Currently, Longhorn creates instance-manager PDBs for replica/engine regardless of the volume state.
|
||||
During scale down, CA tries to find a removable node but is blocked by those instance-manager PDBs.
|
||||
|
||||
We can add IM PDB handling so that the PDB is created and retained only when it is required:
|
||||
|
||||
- There are volumes/engines running on the node. We need to guarantee that the volumes won't crash.
|
||||
- The only available/valid replica of a volume is on the node. Here we need to prevent the volume data from being lost.
|
||||
|
||||
### User Stories
|
||||
|
||||
#### CA scaling
|
||||
Before the enhancement, CA will be blocked by
|
||||
- Pods that are not backed by a controller (engine/replica instance manager).
|
||||
- Pods with [local storage volume mounts](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/utils/drain/drain.go#L222) (longhorn-ui, longhorn-csi-plugin, csi-attacher, csi-provisioner, csi-resizer, csi-snapshotter).
|
||||
|
||||
After the enhancement, instance manager PDBs will be actively managed by Longhorn:
|
||||
- Create all engine/replica instance manager PDBs when the volume is attached.
|
||||
- Delete engine instance manager PDB when the volume is detached.
|
||||
- Delete replica instance manager PDBs, but keep 1, when the volume is detached.
|
||||
|
||||
The user can set a new global setting `kubernetes-cluster-autoscaler-enabled` to unblock CA scaling. This allows Longhorn to annotate Longhorn-managed deployments and engine/replica instance manager pods with `cluster-autoscaler.kubernetes.io/safe-to-evict`.
|
||||
|
||||
|
||||
### User Experience In Detail
|
||||
|
||||
- Configure the setting via Longhorn UI or kubectl.
|
||||
- Ensure every volume's replica count is set to more than 1.
|
||||
- CA is not blocked by Longhorn components when the node doesn't contain volume replica, backing image manager pod, and share manager pod.
|
||||
- Engine/Replica instance-manager PDB will block the node if the volume is attached.
|
||||
- Replica instance-manager PDB will block the node when CA tries to delete the last node with the volume replica.
|
||||
|
||||
### API changes
|
||||
|
||||
`None`
|
||||
|
||||
## Design
|
||||
|
||||
### Implementation Overview
|
||||
|
||||
#### Global setting
|
||||
- Add new global setting `Kubernetes Cluster Autoscaler Enabled (Experimental)`.
|
||||
- The setting is `boolean`.
|
||||
- The default value is `false`.
|
||||
|
||||
#### Annotations
|
||||
|
||||
When setting `kubernetes-cluster-autoscaler-enabled` is `true`, Longhorn will add annotation `cluster-autoscaler.kubernetes.io/safe-to-evict` for the following pods:
|
||||
- The engine and replica instance-manager pods because those are not backed by a controller and use local storage mounts.
|
||||
- The deployment workloads that are managed by the longhorn manager and use any local storage mount. The managed components are labeled with `longhorn.io/managed-by: longhorn-manager` (see the sketch below).
|
||||
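A short sketch of the annotation step, assuming a plain client-go pod object (illustrative, not the actual longhorn-manager code):

```go
package autoscaler // illustrative, not the actual longhorn-manager code

import corev1 "k8s.io/api/core/v1"

const safeToEvictAnnotation = "cluster-autoscaler.kubernetes.io/safe-to-evict"

// annotateSafeToEvict marks a qualifying Longhorn pod as safe to evict when
// the kubernetes-cluster-autoscaler-enabled setting is true.
func annotateSafeToEvict(pod *corev1.Pod, settingEnabled bool) {
	if !settingEnabled {
		return
	}
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	pod.Annotations[safeToEvictAnnotation] = "true"
}
```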
|
||||
#### PodDisruptionBudget
|
||||
|
||||
- No change to the logic to clean up the PDB if the instance-manager doesn't exist.
|
||||
|
||||
- Engine IM PDB:
|
||||
- Delete PDB if volumes are detached;
|
||||
- There is no instance process in IM (im.Status.Instance).
|
||||
- The same logic applies when a node is un-schedulable. A node is un-schedulable when marked in the spec or tainted by CA with `ToBeDeletedByClusterAutoscaler`;
|
||||
- Create PDB if volumes are attached; there are instance processes in IM (im.Status.Instance).
|
||||
|
||||
- Replica IM PDB:
|
||||
- Delete PDB if setting `allow-node-drain-with-last-healthy-replica` is enabled.
|
||||
- Delete PDB if volumes are detached;
|
||||
- There is no instance process in IM (im.Status.Instance)
|
||||
- There are other schedulable nodes that hold a healthy volume replica and a replica IM PDB.
|
||||
- Delete PDB when a node is un-schedulable. A node is un-schedulable when marked in the spec or tainted by CA with `ToBeDeletedByClusterAutoscaler`;
|
||||
- Check if the condition is met to delete PDB (same check as to when volumes are detached).
|
||||
- Enqueue the replica instance-manager of another schedulable node with the volume replica.
|
||||
- Delete PDB.
|
||||
- Create PDB if volumes are attached:
|
||||
- There are instance processes in IM (im.Status.Instance).
|
||||
- Create PDB when volumes are detached;
|
||||
- There is no instance process in IM (im.Status.Instance)
|
||||
- The replica has been started, and there are no other schedulable nodes that hold a healthy volume replica and a replica IM PDB (the decision logic is sketched below).
|
||||
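To make the rules above easier to follow, here is a condensed, hypothetical decision function for the replica IM PDB; the input fields are assumptions that compress the bullet points, not actual longhorn-manager types:

```go
package controller // hypothetical summary of the bullet points above

// replicaIMState compresses the inputs to the replica IM PDB decision.
type replicaIMState struct {
	hasInstanceProcesses   bool // im.Status.Instances is non-empty (volumes attached)
	nodeUnschedulable      bool // marked in spec, or CA taint ToBeDeletedByClusterAutoscaler
	drainWithLastReplicaOK bool // allow-node-drain-with-last-healthy-replica setting
	otherNodeHasReplicaPDB bool // another schedulable node has a healthy replica and a replica IM PDB
	replicaStarted         bool
}

// wantReplicaIMPDB reports whether the replica instance-manager PDB should exist.
func wantReplicaIMPDB(s replicaIMState) bool {
	if s.drainWithLastReplicaOK {
		return false // the setting explicitly allows draining the last healthy replica
	}
	if s.hasInstanceProcesses {
		// Volumes attached: keep the PDB unless the node is un-schedulable
		// and another schedulable node already protects a healthy replica.
		return !(s.nodeUnschedulable && s.otherNodeHasReplicaPDB)
	}
	// Volumes detached: keep the PDB only for the last started healthy replica.
	return s.replicaStarted && !s.otherNodeHasReplicaPDB
}
```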
|
||||
### Test plan
|
||||
|
||||
#### Scenario: test CA
|
||||
|
||||
Given Cluster with Kubernetes cluster-autoscaler.
|
||||
And Longhorn installed.
|
||||
And Set `kubernetes-cluster-autoscaler-enabled` to `true`.
|
||||
And Create deployment with cpu request.
|
||||
```
|
||||
resources:
|
||||
limits:
|
||||
cpu: 300m
|
||||
memory: 30Mi
|
||||
requests:
|
||||
cpu: 150m
|
||||
memory: 15Mi
|
||||
```
|
||||
|
||||
When Trigger CA to scale up by increasing deployment replicas.
|
||||
(double the node number, not including host node)
|
||||
```
|
||||
10 * math.ceil(allocatable_millicpu/cpu_request*node_number/10)
|
||||
```
|
||||
Then Cluster should have double the node number.
|
||||
|
||||
When Trigger CA to scale down by decreasing deployment replicas.
|
||||
(original node number)
|
||||
Then Cluster should have original node number.
|
||||
|
||||
#### Scenario: test CA scale down all nodes containing volume replicas
|
||||
|
||||
Given Cluster with Kubernetes cluster-autoscaler.
|
||||
And Longhorn installed.
|
||||
And Set `kubernetes-cluster-autoscaler-enabled` to `true`.
|
||||
And Create volume.
|
||||
And Attach the volume.
|
||||
And Write some data to volume.
|
||||
And Detach the volume.
|
||||
And Create deployment with cpu request.
|
||||
|
||||
When Trigger CA to scale up by increasing deployment replicas.
|
||||
(double the node number, not including host node)
|
||||
Then Cluster should have double the node number.
|
||||
|
||||
When Annotate new nodes with `cluster-autoscaler.kubernetes.io/scale-down-disabled`.
|
||||
(this ensures scale-down only the old nodes)
|
||||
And Trigger CA to scale down by decreasing deployment replicas.
|
||||
(original node number)
|
||||
Then Cluster should have original node number + 1 blocked node.
|
||||
|
||||
When Attach the volume to a new node. This triggers replica rebuild.
|
||||
And Volume data should be the same.
|
||||
And Detach the volume.
|
||||
Then Cluster should have original node number.
|
||||
And Volume data should be the same.
|
||||
|
||||
#### Scenario: test CA should block scale down of node running backing image manager pod
|
||||
|
||||
Similar to `Scenario: test CA scale down all nodes containing volume replicas`.
|
||||
|
||||
### Upgrade strategy
|
||||
|
||||
`N/A`
|
||||
|
||||
## Note [optional]
|
||||
|
||||
`N/A`
|
162
enhancements/20220420-longhorn-snapshot-crd.md
Normal file
@ -0,0 +1,162 @@
|
||||
|
||||
# Longhorn Snapshot CRD
|
||||
|
||||
## Summary
|
||||
|
||||
Supporting a Longhorn snapshot CRD allows users to query/create/delete volume snapshots using kubectl. This is one step closer to making kubectl a Longhorn CLI. Also, this will be a building block for the future auto-attachment/auto-detachment refactoring for snapshot creation, snapshot deletion, and volume cloning.
|
||||
|
||||
### Related Issues
|
||||
|
||||
https://github.com/longhorn/longhorn/issues/3144
|
||||
|
||||
## Motivation
|
||||
|
||||
### Goals
|
||||
|
||||
1. Support Longhorn snapshot CRD to allow users to query/create/delete volume snapshots using kubectl.
|
||||
2. A building block for the future auto-attachment/auto-detachment refactoring for snapshot creation, deletion, volume cloning.
|
||||
3. Pay attention to the scalability problem. A cluster with 1k volumes might have 30k snapshots. We should make sure not to overload the controller work-queue and not to make too many gRPC calls to engine processes.
|
||||
|
||||
## Proposal
|
||||
|
||||
Introduce a new snapshot CRD and a snapshot controller. The life cycle of a snapshot CR is as below:
|
||||
1. Create (by engine monitor/kubectl)
|
||||
1. When a user creates a new snapshot CR, Longhorn tries to create a new snapshot
|
||||
2. When there is a snapshot in the volume that doesn't correspond to any snapshot CR, Longhorn will generate a snapshot CR for that snapshot
|
||||
2. Update (by snapshot controller)
|
||||
1. Snapshot controller will reconcile the snapshot CR status with the snapshot info inside the volume engine
|
||||
3. Delete (by engine monitor/kubectl)
|
||||
1. When a snapshot CR is deleted (by the user or by Longhorn), the snapshot controller will make sure that the snapshot is removed from the engine before removing the finalizer and allowing the deletion
|
||||
2. Deleting a volume should be blocked until all of its snapshots are removed
|
||||
3. When there is a system-generated snapshot CR that doesn't correspond to any snapshot info inside the engine status, Longhorn will delete the snapshot CR
|
||||
|
||||
### User Stories
|
||||
|
||||
Before this enhancement, users have to use the Longhorn UI to query/create/delete volume snapshots. For users with access only to the CLI, another option is to use our [Python client](https://longhorn.io/docs/1.2.4/references/longhorn-client-python/). However, the Python client is not as intuitive and easy to use as kubectl.
|
||||
|
||||
After this enhancement, users will be able to use kubectl to query/create/delete Longhorn snapshots just like what they can do with Longhorn backups. There is no additional requirement for users to use this feature.
|
||||
|
||||
The experience details should be in the `User Experience In Detail` later.
|
||||
|
||||
#### Story 1
|
||||
A user wants to limit the snapshot count to save space. Snapshot RecurringJobs set to retain X snapshots do not touch unrelated snapshots, so if one ever changes the name of the RecurringJob, the old snapshots will stick around forever. These then have to be manually deleted in the UI. Some kind of browser automation framework might also work for pruning large numbers of snapshots, but this feels janky. Having a CRD for snapshots would greatly simplify this, as one could prune snapshots using kubectl, much like how one can currently manage backups using kubectl due to the existence of the `backups.longhorn.io` CRD.
|
||||
|
||||
### User Experience In Detail
|
||||
|
||||
There is no additional requirement for users to use this feature.
|
||||
|
||||
### API changes
|
||||
|
||||
We don't want to have disruptive changes in this initial version of snapshot CR (e.g., snapshot API create/delete shouldn't change. Snapshot status is still inside the engine status).
|
||||
|
||||
We can wait for the snapshot CRD to be a bit more mature (no issue with scalability) and make the disruptive changes in the next version of snapshot CR (e.g., snapshot API create/delete changes to create/delete snapshot CRs. Snapshot status is removed from inside the engine status)
|
||||
|
||||
## Design
|
||||
|
||||
### Implementation Overview
|
||||
|
||||
Introduce a new snapshot CRD and a snapshot controller.
|
||||
The snapshot CRD is:
|
||||
|
||||
```go
|
||||
// SnapshotSpec defines the desired state of Longhorn Snapshot
|
||||
type SnapshotSpec struct {
|
||||
// the volume that this snapshot belongs to.
|
||||
// This field is immutable after creation.
|
||||
// Required
|
||||
Volume string `json:"volume"`
|
||||
// require creating a new snapshot
|
||||
// +optional
|
||||
CreateSnapshot bool `json:"createSnapshot"`
|
||||
// The labels of snapshot
|
||||
// +optional
|
||||
// +nullable
|
||||
Labels map[string]string `json:"labels"`
|
||||
}
|
||||
|
||||
// SnapshotStatus defines the observed state of Longhorn Snapshot
|
||||
type SnapshotStatus struct {
|
||||
// +optional
|
||||
Parent string `json:"parent"`
|
||||
// +optional
|
||||
// +nullable
|
||||
Children map[string]bool `json:"children"`
|
||||
// +optional
|
||||
MarkRemoved bool `json:"markRemoved"`
|
||||
// +optional
|
||||
UserCreated bool `json:"userCreated"`
|
||||
// +optional
|
||||
CreationTime string `json:"creationTime"`
|
||||
// +optional
|
||||
Size int64 `json:"size"`
|
||||
// +optional
|
||||
// +nullable
|
||||
Labels map[string]string `json:"labels"`
|
||||
// +optional
|
||||
OwnerID string `json:"ownerID"`
|
||||
// +optional
|
||||
Error string `json:"error,omitempty"`
|
||||
// +optional
|
||||
RestoreSize int64 `json:"restoreSize"`
|
||||
// +optional
|
||||
ReadyToUse bool `json:"readyToUse"`
|
||||
}
|
||||
```
|
||||
The life cycle of a snapshot CR is as below:
|
||||
|
||||
1. **Create**
|
||||
1. When a snapshot CR is created, Longhorn mutation webhook will:
|
||||
1. Add a volume label `longhornvolume: <VOLUME-NAME>` to the snapshot CR. This allows us to efficiently find the snapshots corresponding to a volume without having to list potentially thousands of snapshots (see the sketch after this list).
|
||||
1. Add `longhornFinalizerKey` to the snapshot CR to prevent it from being removed before Longhorn has a chance to clean up the corresponding snapshot
|
||||
1. Populate the value for `snapshot.OwnerReferences` to uniquely identify the volume of this snapshot. This field contains the volume UID to uniquely identify the volume in case the old volume was deleted and a new volume was created with the same name.
|
||||
2. For a user-created snapshot CR, the field `Spec.CreateSnapshot` should be set to `true`, indicating that Longhorn should provision a new snapshot for this CR.
|
||||
1. Longhorn snapshot controller will pick up this CR, check to see if there already is a snapshot inside the `engine.Status.Snapshots`.
|
||||
1. If there already is a snapshot inside engine.Status.Snapshots, update the snapshot.Status with the snapshot info inside `engine.Status.Snapshots`
|
||||
2. If there isn't a snapshot inside `engine.Status.Snapshots` then:
|
||||
1. Make a call to the engine process to check if there is already a snapshot with the same name. This is to make sure we don't accidentally create 2 snapshots with the same name. This logic can be removed after [the issue](https://github.com/longhorn/longhorn/issues/3844) is resolved
|
||||
1. If the snapshot doesn't exist inside the engine process, make another call to create the snapshot
|
||||
3. For the snapshots that already exist inside `engine.Status.Snapshots` but don't have corresponding snapshot CRs (i.e., system-generated snapshots), the engine monitoring will generate snapshot CRs for them. A snapshot CR generated by engine monitoring will have `Spec.CreateSnapshot` set to `false`, so the Longhorn snapshot controller will not create a snapshot for those CRs. The snapshot controller only syncs status for those snapshot CRs
|
||||
2. **Update**
|
||||
1. Snapshot CR spec and labels are immutable after creation. This will be enforced by the admission webhook
|
||||
2. Sync the snapshot info from `engine.Status.Snapshots` to the `snapshot.Status`.
|
||||
3. If there is any error or if the snapshot is marked as removed, set `snapshot.Status.ReadyToUse` to `false`
|
||||
4. If there is no snapshot info inside `engine.Status.Snapshots`, mark `snapshot.Status.ReadyToUse` as `false` and populate `snapshot.Status.Error` with the lost message. This snapshot will eventually be updated again when engine monitoring updates `engine.Status.Snapshots`, or it may be cleaned up as described in the section below
|
||||
3. **Delete**
|
||||
1. The engine monitor will be responsible for removing all snapshot CRs that don't have matching snapshot info and are in one of the following cases:
|
||||
1. The snapshot CRs with `Spec.CreateSnapshot: false` (snapshot CR that is auto generated by the engine monitoring)
|
||||
2. The snapshot CRs with `Spec.CreateSnapshot: true` and `snapCR.Status.CreationTime != nil` (snapshot CRs that requested a new snapshot where the snapshot was provisioned before but no longer exists now)
|
||||
2. When a snapshot CR has its deletion timestamp set, the snapshot controller will:
|
||||
1. Check to see if the actual snapshot inside the engine process exists.
|
||||
1. If it exists:
|
||||
1. If it has not been marked as removed, issue a gRPC call to the engine process to remove the snapshot
|
||||
2. Check if the engine is in the purging state; if not, issue a snapshot purge call to the engine process
|
||||
2. If it doesn't exist, remove the `longhornFinalizerKey` to allow the deletion of the snapshot CR
|
||||
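A rough sketch of the mutation performed in the Create step above; the stand-in types and the finalizer value are assumptions for illustration, not the actual longhorn-manager webhook code:

```go
package webhook // illustrative stand-ins, not the actual longhorn-manager types

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const longhornFinalizerKey = "longhorn.io" // assumed value, for illustration

type Snapshot struct{ metav1.ObjectMeta }
type Volume struct{ metav1.ObjectMeta }

// mutateSnapshotCR applies the three mutations described above.
func mutateSnapshotCR(snap *Snapshot, vol *Volume) {
	// 1. The volume label enables efficient per-volume snapshot lookup.
	if snap.Labels == nil {
		snap.Labels = map[string]string{}
	}
	snap.Labels["longhornvolume"] = vol.Name

	// 2. The finalizer keeps the CR around until the engine snapshot is cleaned up.
	snap.Finalizers = append(snap.Finalizers, longhornFinalizerKey)

	// 3. The owner reference carries the volume UID, so a recreated volume
	// with the same name is not mistaken for this snapshot's volume.
	isController := true
	snap.OwnerReferences = []metav1.OwnerReference{{
		APIVersion: "longhorn.io/v1beta2", // assumed API version
		Kind:       "Volume",
		Name:       vol.Name,
		UID:        vol.UID,
		Controller: &isController,
	}}
}
```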
|
||||
### Test plan
|
||||
|
||||
Integration test plan.
|
||||
|
||||
For the engine enhancement, an engine integration test plan is also required.
|
||||
|
||||
### Upgrade strategy
|
||||
|
||||
Anything that is required if users want to upgrade to this enhancement
|
||||
|
||||
## Note [optional]
|
||||
|
||||
How do we address the scalability issue?
|
||||
1. Controller workqueue
|
||||
1. Disable resync period for snapshot informer
|
||||
1. Enqueue snapshot only when:
|
||||
1. There is a change in snapshot CR
|
||||
1. There is a change in `engine.Status.CurrentState` (volume attach/detach event), `engine.Status.PurgeStatus` (for snapshot deletion event), `engine.Status.Snapshots` (for snapshot creation/update event)
|
||||
1. This enhancement proposal doesn't make additional calls to the engine process compared to the existing design.
|
||||
|
||||
## Todo
|
||||
|
||||
For the special snapshot `volume-head`, we don't create a snapshot CR for this special snapshot because:
|
||||
1. From the use-case perspective, users cannot delete this snapshot anyway, so there is no need to generate this snapshot CR
|
||||
1. The name `volume-head` is not globally unique, so we might have to include the volume name if we want to generate this snapshot CR
|
||||
1. We would have to implement special logic to prevent users from deleting this special CR
|
||||
1. On the flip side, if we generate this special CR, users will have a complete picture of the snapshot chain
|
||||
2. The VolumeHead CR may suddenly point to another actual file during the snapshot creation.
|
323
enhancements/20220428-storage-network-through-grpc-proxy.md
Normal file
@ -0,0 +1,323 @@
|
||||
# Storage Network Through gRPC Proxy
|
||||
|
||||
## Summary
|
||||
|
||||
Currently, Longhorn uses the Kubernetes cluster CNI network and shares the network with the rest of the cluster's resources. This makes network availability impossible to control.
|
||||
|
||||
We would like to have a global `Storage Network` setting to allow users to input an existing Multus `NetworkAttachmentDefinition` CR network in `<namespace>/<name>` format. Longhorn can use the storage network for in-cluster data traffic.
|
||||
|
||||
The segregation can be achieved by replacing the engine binary calls in the Longhorn manager with gRPC connections to the instance manager. Then the instance manager will be responsible for handling the requests between the management network and the storage network.
|
||||
|
||||
---
|
||||
**_NOTE:_** There are other possible approaches we have considered for segregating the networks:
|
||||
|
||||
- Add Longhorn Manager to the storage network. The Manager needs to restart itself to get the secondary storage network IP, and there is no storage network segregation to the Longhorn data plane (engine & replica).
|
||||
|
||||
- Provide Engine/Replica with dual IPs. Code change around this approach is confusing and likely to increase maintenance complexity.
|
||||
---
|
||||
|
||||
### Related Issues
|
||||
|
||||
https://github.com/longhorn/longhorn/issues/2285
|
||||
|
||||
https://github.com/longhorn/longhorn/issues/3546
|
||||
|
||||
## Motivation
|
||||
|
||||
### Goals
|
||||
|
||||
- Have a new `Storage Network` setting.
|
||||
|
||||
- Replace Manager engine binary calls with gRPC client to the instance manager.
|
||||
|
||||
- Keep using the management network for the communication between Manager and Instance Manager.
|
||||
|
||||
- Use the storage network for the data traffic of data plane components to the instance processes. Those are the engines and replicas in Instance Manager pods.
|
||||
|
||||
- Support backward compatibility of the communication between the new Manager and the old Instance Manager after the upgrade. Ensure existing engine/replicas work without issues.
|
||||
|
||||
### Non-goals [optional]
|
||||
|
||||
- Setup and configure the Multus `NetworkAttachmentDefinition` CRs.
|
||||
|
||||
- Monitor for `NetworkAttachmentDefintition` CRs. The user needs to ensure the traffic is reachable between pods and across different nodes. Without monitoring, Longhorn will not get notified of the update of the `NetworkAttachmentDefinition` CRs. Thus the user should create a new `NetworkAttachmentDefinition` CR and update the `storage-network` setting.
|
||||
|
||||
- Out-cluster data traffic. For example, backing image upload and download.
|
||||
|
||||
|
||||
## Proposal
|
||||
|
||||
### Communication between Manager and Engine/Replica processes via Instance Manager gRPC proxy
|
||||
|
||||
- Introduce a new gRPC server in Instance Manager.
|
||||
|
||||
- Keep reusable connections between Manager and Instance Managers.
|
||||
|
||||
- Allow Manager to fall back to engine binary call when communicating with old Instance Manager.
|
||||
|
||||
### Storage Network
|
||||
|
||||
- Add a new `Storage Network` global setting.
|
||||
|
||||
- Add `k8s.v1.cni.cncf.io/networks` annotation to pods that involve data transfer. The annotation will use the value from the storage network setting. Multus will attach a secondary network to pods with this annotation.
|
||||
- Engine instance manager pods
|
||||
- Replica instance manager pods
|
||||
- Backing image data source pods. Data traffic between replicas and backing image data source.
|
||||
- Backing image manager pods. Data traffic in-between backing image managers.
|
||||
|
||||
- Add a new `storageIP` field to the `Engine`, `Replica`, and `BackingImageManager` CRD status. The storage IP will be used to communicate with the instance processes (see the sketch below).
|
||||
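A small sketch of applying the storage-network annotation from the setting value (illustrative; the helper and setting plumbing are assumptions):

```go
package main // illustrative; the setting plumbing is an assumption

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const cniNetworksAnnotation = "k8s.v1.cni.cncf.io/networks"

// applyStorageNetwork validates the "<namespace>/<name>" setting value and
// annotates the pod so Multus attaches the secondary (storage) network.
func applyStorageNetwork(pod *corev1.Pod, storageNetwork string) error {
	parts := strings.SplitN(storageNetwork, "/", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return fmt.Errorf("storage network %q is not in <namespace>/<name> format", storageNetwork)
	}
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	pod.Annotations[cniNetworksAnnotation] = storageNetwork
	return nil
}

func main() {
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "instance-manager-r-example"}}
	_ = applyStorageNetwork(pod, "longhorn-system/storage-net") // hypothetical CR name
}
```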
|
||||
### User Stories
|
||||
|
||||
#### Story 1 - set up the storage network
|
||||
|
||||
As a Longhorn user / System administrator.
|
||||
|
||||
I have set up Multus `NetworkAttachmentDefinition` for additional network management.
|
||||
And I want to segregate Longhorn in-cluster data traffic with an additional network interface.
|
||||
Longhorn should provide a setting to input the `NetworkAttachmentDefinition` CR name for the storage network.
|
||||
|
||||
So I can guarantee network availability for Longhorn in-cluster data traffic.
|
||||
|
||||
|
||||
#### Story 2 - upgrade
|
||||
|
||||
As a Longhorn user / System administrator.
|
||||
|
||||
When I upgrade Longhorn, the changes should support existing attached volumes.
|
||||
|
||||
So I can decide when to upgrade the Engine Image.
|
||||
|
||||
|
||||
### User Experience In Detail
|
||||
|
||||
#### Story 1 - set up the storage network
|
||||
|
||||
1. I have a Kubernetes cluster with Multus installed.
|
||||
1. I created `NetworkAttachmentDefinition` CR and ensured the configuration is correct.
|
||||
1. I added `<namespace>/<NetworkAttachmentDefinition name>` to the Longhorn `Storage Network` setting.
|
||||
1. I see the setting update fail when volumes are attached.
|
||||
1. I detach all volumes.
|
||||
1. When updating the setting, I see the engine/replica instance manager pods and backing image manager pods restarted.
|
||||
1. I attach the volumes.
|
||||
1. I describe Engine, Replica, and BackingImageManager, and see the `storageIP` in CR status is in the range of the `NetworkAttachmentDefinition` subnet/CIDR. I also see the `storageIP` is different from the `ip` in CR status.
|
||||
1. I describe the Engine and see the `replicaAddressMap` in CR spec and status is using the storage IP.
|
||||
1. I see pod logs indicate the network directions.
|
||||
|
||||
#### Story 2 - upgrade

1. I have a Longhorn v1.2.4 cluster.
1. I have healthy volumes attached.
1. I upgrade Longhorn.
1. I see the volumes still attached and healthy, with an engine image upgrade available.
1. I cannot upgrade the volume engine image while the volume is attached.
1. After I detach the volume, I can upgrade its engine image.
1. I attach the volumes.
1. I see the volumes are healthy.

### API changes

- The new global setting `Storage Network` will use the existing `/v1/settings` API.

## Design

### gRPC Proxy Implementation Overview

#### Instance Manager

- Start the gRPC proxy server on the port next to the process server's; the default should be `localhost:8501`.
- The gRPC proxy service shares the same `imrpc` package name as the process server. The proxied methods are listed here, followed by a startup sketch.
```
Ping

ServerVersionGet

VolumeGet
VolumeExpand
VolumeFrontendStart
VolumeFrontendShutdown

VolumeSnapshot
SnapshotList
SnapshotRevert
SnapshotPurge
SnapshotPurgeStatus
SnapshotClone
SnapshotCloneStatus
SnapshotRemove

SnapshotBackup
SnapshotBackupStatus
BackupRestore
BackupRestoreStatus
BackupVolumeList
BackupVolumeGet
BackupGet
BackupConfigMetaGet
BackupRemove

ReplicaAdd
ReplicaList
ReplicaRebuildingStatus
ReplicaVerifyRebuild
ReplicaRemove
```
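
A minimal startup sketch, assuming the process server listens on the default `localhost:8500`; the registration call is illustrative since the generated `imrpc` bindings are not shown here:

```
package main

import (
	"fmt"
	"net"

	"google.golang.org/grpc"
)

const processServerPort = 8500 // assumed default process server port

// startProxyServer serves the proxy gRPC service on the port next to the
// process server's, i.e. localhost:8501 by default.
func startProxyServer() error {
	addr := fmt.Sprintf("localhost:%d", processServerPort+1)
	listener, err := net.Listen("tcp", addr)
	if err != nil {
		return err
	}
	srv := grpc.NewServer()
	// imrpc.RegisterProxyEngineServiceServer(srv, &proxyServer{}) // illustrative registration
	return srv.Serve(listener)
}
```
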
#### Manager

- Create a _proxyHandler_ object to map the controller ID to an _EngineClient_ interface. The _proxyHandler_ object is shared between controllers.

- The Instance Manager Controller is responsible for the life cycle of the proxy gRPC client. For every enqueue:
  - Check for an existing gRPC client in the _proxyHandler_, and check the connection liveness with a `Ping` request.
  - If the proxy gRPC client connection is dead, stop the proxy gRPC client and return an error so the item re-queues.
  - If the proxy gRPC client doesn't exist in the _proxyHandler_, start a new gRPC connection and map it to the current controller ID.
  - Do not create the proxy gRPC connection when the instance manager version is less than the current version; the fallback interface caller will be provided when getting the client.

- The gRPC client will use the _EngineClient_ interface.
  - Provide a fallback interface caller when getting the gRPC client from the _proxyHandler_. The fallback callers are:
    - the existing `Engine` client used for the binary call
    - the `BackupTargetClient`
  - Use the fallback caller when the instance manager version is less than the current version (see the sketch after the interface definitions below).
  - Add a new `BackupTargetBinaryClient` interface for the fallback:
```
type BackupTargetBinaryClient interface {
	BackupGet(destURL string, credential map[string]string) (*Backup, error)
	BackupVolumeGet(destURL string, credential map[string]string) (volume *BackupVolume, err error)
	BackupNameList(destURL, volumeName string, credential map[string]string) (names []string, err error)
	BackupVolumeNameList(destURL string, credential map[string]string) (names []string, err error)
	BackupDelete(destURL string, credential map[string]string) (err error)
	BackupVolumeDelete(destURL, volumeName string, credential map[string]string) (err error)
	BackupConfigMetaGet(destURL string, credential map[string]string) (*ConfigMetadata, error)
}
```

- Introduce a new `EngineClientProxy` interface for the proxy, which includes proxy-specific methods and embeds the existing `EngineClient` and the new `BackupTargetBinaryClient` interfaces. This makes the interface adaptive to both proxy and non-proxy/fallback operations:
```
type EngineClientProxy interface {
	EngineClient
	BackupTargetBinaryClient

	IsGRPC() bool
	Start(*longhorn.InstanceManager, logrus.FieldLogger, *datastore.DataStore) error
	Stop(*longhorn.InstanceManager) error
	Ping() error
}
```
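
A self-contained sketch of the version-based fallback when fetching a client from the _proxyHandler_; the type names and the API-version constant are assumptions, not the actual Longhorn symbols:

```
package main

import "fmt"

const currentAPIVersion = 2 // assumed: the version that introduces the proxy

type Client interface {
	Ping() error
}

type proxyHandler struct {
	clients map[string]Client // controller ID -> proxy gRPC client
}

// getClient returns the proxy client for a controller, or the fallback
// (engine binary / backup target) client when the instance manager is too old.
func (h *proxyHandler) getClient(controllerID string, imAPIVersion int, fallback Client) (Client, error) {
	if imAPIVersion < currentAPIVersion {
		return fallback, nil // old instance manager: use binary calls
	}
	client, ok := h.clients[controllerID]
	if !ok {
		return nil, fmt.Errorf("proxy client for %s not started yet", controllerID)
	}
	if err := client.Ping(); err != nil {
		delete(h.clients, controllerID) // dead connection: drop it so the controller re-queues
		return nil, err
	}
	return client, nil
}
```
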
### Storage Network Implementation Overview

#### Setting

Add a new global setting `Storage Network`.
- The setting value is a `string`.
- The default value is `""`.
- The setting should be in the `danger zone` category.
- The setting will be validated by the admission webhook setting validator (a validation sketch follows this list).
- The setting should be in the form `<NAMESPACE>/<NETWORK-ATTACHMENT-DEFINITION-NAME>`.
- The setting cannot be updated when volumes are attached.
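
A minimal sketch of the validator logic, assuming the webhook can already tell whether any volume is attached; the function name is illustrative:

```
package main

import (
	"fmt"
	"strings"
)

// validateStorageNetwork enforces the <NAMESPACE>/<NETWORK-ATTACHMENT-DEFINITION-NAME>
// form and rejects updates while volumes are attached.
func validateStorageNetwork(value string, volumesAttached bool) error {
	if value == "" {
		return nil // the default: storage network disabled
	}
	if volumesAttached {
		return fmt.Errorf("cannot update the storage-network setting while volumes are attached")
	}
	parts := strings.Split(value, "/")
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return fmt.Errorf("storage-network %q must be in <namespace>/<name> form", value)
	}
	return nil
}
```
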
#### CRD

Engine:
- New `storageIP` in status.
- Use the replica `status.storageIP` instead of the replica `status.IP` for the `replicaAddressMap`.

Replica:
- New `storageIP` in status.

BackingImageManager:
- New `storageIP` in status.

#### Instance Manager Controller

1. When creating instance manager pods, add the `k8s.v1.cni.cncf.io/networks` annotation with `lhnet1` as the interface name. Use the `storage-network` setting value for the namespace and name, for example:
```
k8s.v1.cni.cncf.io/networks: '
[
    {
        "namespace": "kube-system",
        "name": "demo-10-30-0-0",
        "interface": "lhnet1"
    }
]
'
```
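
A sketch of how a controller could build this annotation value from the `storage-network` setting; the helper is an assumption, not the actual Longhorn function, and the same construction applies to the backing image manager and backing image data source pods below:

```
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

type networkSelection struct {
	Namespace string `json:"namespace"`
	Name      string `json:"name"`
	Interface string `json:"interface"`
}

// storageNetworkAnnotation turns "<namespace>/<name>" into the JSON value
// for the k8s.v1.cni.cncf.io/networks annotation with lhnet1 as the interface.
func storageNetworkAnnotation(setting string) (string, error) {
	parts := strings.Split(setting, "/")
	if len(parts) != 2 {
		return "", fmt.Errorf("invalid storage-network setting %q", setting)
	}
	value, err := json.Marshal([]networkSelection{
		{Namespace: parts[0], Name: parts[1], Interface: "lhnet1"},
	})
	if err != nil {
		return "", err
	}
	return string(value), nil
}
```
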
#### Instance Handler

1. Get the IP from the instance manager Pod annotation `k8s.v1.cni.cncf.io/network-status` and use it as the `Engine` and `Replica` storage IP. When the `storage-network` setting is empty, the storage IP will be the pod IP. A parsing sketch follows.
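
A hedged sketch of the parsing step: Multus writes `k8s.v1.cni.cncf.io/network-status` as a JSON list with one entry per attached network, and the struct below keeps only the fields this logic needs:

```
package main

import "encoding/json"

type networkStatus struct {
	Name      string   `json:"name"`
	Interface string   `json:"interface"`
	IPs       []string `json:"ips"`
}

// storageIPFromAnnotation returns the IP attached on lhnet1, falling back to
// the pod IP when the storage network is not configured on the pod.
func storageIPFromAnnotation(annotation, podIP string) string {
	var statuses []networkStatus
	if err := json.Unmarshal([]byte(annotation), &statuses); err != nil {
		return podIP
	}
	for _, status := range statuses {
		if status.Interface == "lhnet1" && len(status.IPs) > 0 {
			return status.IPs[0]
		}
	}
	return podIP
}
```
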
#### Backing Image Manager Controller

1. When creating backing image manager pods, add the `k8s.v1.cni.cncf.io/networks` annotation with `lhnet1` as the interface name. Use the `storage-network` setting value for the namespace and name, for example:
```
k8s.v1.cni.cncf.io/networks: '
[
    {
        "namespace": "kube-system",
        "name": "demo-10-30-0-0",
        "interface": "lhnet1"
    }
]
'
```
1. Get the IP from the backing image manager Pod annotation `k8s.v1.cni.cncf.io/network-status` and use it as the `BackingImageManager` storage IP. When the `storage-network` setting is empty, the storage IP will be the pod IP.

#### Backing Image Data Source Controller

1. When creating backing image data source pods, add the `k8s.v1.cni.cncf.io/networks` annotation with `lhnet1` as the interface name. Use the `storage-network` setting value for the namespace and name, for example:
```
k8s.v1.cni.cncf.io/networks: '
[
    {
        "namespace": "kube-system",
        "name": "demo-10-30-0-0",
        "interface": "lhnet1"
    }
]
'
```

#### Backing Image Manager - Export From Volume

1. Get the IPv4 address of the `lhnet1` interface and use it as the receiver address. Use the pod IP if the interface doesn't exist, as in the sketch below.
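
A sketch of the receiver-address resolution using the standard library; this illustrates the described behavior, not the exact backing image manager code:

```
package main

import "net"

// receiverAddress prefers the IPv4 address on lhnet1 and falls back to the
// given pod IP when the interface is absent.
func receiverAddress(podIP string) string {
	iface, err := net.InterfaceByName("lhnet1")
	if err != nil {
		return podIP // interface doesn't exist: use the pod IP
	}
	addrs, err := iface.Addrs()
	if err != nil {
		return podIP
	}
	for _, addr := range addrs {
		if ipNet, ok := addr.(*net.IPNet); ok {
			if ip4 := ipNet.IP.To4(); ip4 != nil {
				return ip4.String()
			}
		}
	}
	return podIP
}
```
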
#### Setting Controller

1. Do not update the `storage-network` setting, and return an error, when `Volumes` are attached.
1. Delete all backing image manager pods.
1. Delete all instance manager pods, so both kinds of pods are recreated with the new network annotation (see the sketch below).
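
A hedged sketch of the restart step with client-go; the label selectors are assumptions about how the pods are labeled:

```
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// restartDataPlanePods deletes the pods so their controllers recreate them
// with the updated k8s.v1.cni.cncf.io/networks annotation.
func restartDataPlanePods(ctx context.Context, client kubernetes.Interface, namespace string) error {
	for _, selector := range []string{
		"longhorn.io/component=instance-manager",      // assumed label
		"longhorn.io/component=backing-image-manager", // assumed label
	} {
		err := client.CoreV1().Pods(namespace).DeleteCollection(ctx,
			metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
	}
	return nil
}
```
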
### Test plan

#### CI Pipeline

All existing tests should pass when the cluster has the storage network configured. We should consider adding a new test pipeline for the storage network.

Infra prerequisites:
- A secondary network interface added to each cluster instance.
- Multus deployed.
- A `NetworkAttachmentDefinition` created.
- Routing configured on all cluster nodes to ensure the network is reachable between instances.
- For AWS, network source/destination checks disabled for each cloud-provider instance.

#### Test storage-network setting

Scenario: `Engine`, `Replica` and `BackingImageManager` should use an IP in the `storage-network` `NetworkAttachmentDefinition` subnet/CIDR range after the setting update.

### Upgrade strategy

[Some old instance manager pods are still running after upgrade](https://longhorn.io/kb/troubleshooting-some-old-instance-manager-pods-are-still-running-after-upgrade/).
Old engine instance managers do not have the gRPC proxy server for Manager to communicate with.
Hence, we need to support backward compatibility.

Manager communication:
- Bump the instance manager API version.
- Manager checks for an incompatible version and falls back to requests through the engine binary.

Volume/Engine live upgrade:
- Keep live upgrade. This serves as a soft notice to users that we will not enforce any change in 1.3, but it will happen in 1.4.

## Note [optional]

`None`