Compare commits

...

40 Commits

Author SHA1 Message Date
davidko
a6e47288d1 release: v1.5.2-rc3
Signed-off-by: davidko <dko@suse.com>
2023-10-31 19:40:11 +08:00
davidko
e976f7f4b0 release: v1.5.2-rc2
Signed-off-by: davidko <dko@suse.com>
2023-10-29 23:50:12 +08:00
davidko
a5a567d738 release: v1.5.2-rc1
Signed-off-by: davidko <dko@suse.com>
2023-10-27 13:36:46 +08:00
James Lu
3a893cf09f fix: typos in enhancements.
Fix some code spell check errors.

Signed-off-by: James Lu <jamesluhz@gmail.com>
2023-10-23 16:44:55 +08:00
arlan lloyd
8f4c3eb8d3 add conditional
Signed-off-by: arlan lloyd <arlanlloyd@gmail.com>
(cherry picked from commit febfa7eef7)
2023-10-23 16:44:55 +08:00
Phan Le
4087206819 Fix bug: check script fails to perform all checks
When piping the script to bash (cat ./environment_check.sh | bash), the
part after `kubectl exec -i` will be interpreted as the input for the
command inside the kubectl exec command. As a result, the env check script
doesn't perform the steps after that kubectl exec command. Removing the
`-i` flag fixed the issue.

Also, replacing `kubectl exec -t` with `kubectl exec` because the input of the
kubectl exec command is not a terminal device.

longhorn-5653

Signed-off-by: Phan Le <phan.le@suse.com>
2023-10-19 21:41:55 +08:00
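The stdin-swallowing behavior this commit describes is easy to reproduce without a cluster. A minimal sketch, using `head` as a stand-in for the stdin-reading `kubectl exec -i` (file path and step names are illustrative):

```shell
# Simulate the bug: a check script whose middle command reads stdin,
# the way `kubectl exec -i` does.
printf 'echo step1\nhead -c 1000 >/dev/null\necho step2\n' > /tmp/demo_check.sh

# Piped form (cat script | bash): `head` consumes the rest of the script
# from the shared stdin, so "step2" never runs -- only "step1" prints.
cat /tmp/demo_check.sh | bash

# With no stdin-reading command attached to the pipe (the effect of
# dropping `-i`), every step executes: prints "step1" then "step2".
bash /tmp/demo_check.sh </dev/null
```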
Phan Le
24e7f7f10a Add kernel release check to environment_check.sh
longhorn-6854

Signed-off-by: Phan Le <phan.le@suse.com>
(cherry picked from commit d30a970ea8)
2023-10-11 17:14:09 -07:00
James Munson
de3e168d2c Add nfsOptions parameter to sample storageclass.yaml
Signed-off-by: James Munson <james.munson@suse.com>
(cherry picked from commit c0a258afef)
2023-09-23 00:30:12 +08:00
Chin-Ya Huang
9f576be79a task: use head images for security scan
ref: 6737

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
2023-09-21 08:34:56 +08:00
James Munson
a11da25f9b Fix some small errors on StorageClass NodeSelector.
Signed-off-by: James Munson <james.munson@suse.com>
2023-09-06 14:34:26 -07:00
Chin-Ya Huang
e1b00ad2d1 feat(support-bundle): version bump
ref: 6544

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 914fb89687)
2023-08-23 14:00:33 +08:00
Austin Heyne
33aa315e14 Add reserve storage percentage in helm chart
- Add the StorageReservedPercentageForDefaultDisk configuration to the
helm chart.

Signed-off-by: Austin Heyne <aheyne@ccri.com>
(cherry picked from commit fab23a27aa)
2023-08-11 18:44:10 +08:00
Yarden Shoham
751ed036d2 chart: Update settings based on the instance managers consolidation
- Add the setting added in https://github.com/longhorn/longhorn-manager/pull/1731 in the helm chart
- Related to https://github.com/longhorn/longhorn/issues/5208

Signed-off-by: Yarden Shoham <git@yardenshoham.com>
(cherry picked from commit 339e501042)
2023-08-07 17:27:16 +08:00
David Ko
19e8fefd3a release: 1.5.1
Signed-off-by: David Ko <dko@suse.com>
2023-07-19 18:58:18 +08:00
Derek Su
ab877fe501 chore(chart): remove webhooks and recovery-backend
Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit f2c474e636)
2023-07-17 12:57:23 +08:00
David Ko
1278c9737f release: 1.5.1-rc1
Signed-off-by: David Ko <dko@suse.com>
2023-07-16 23:30:34 +08:00
Chin-Ya Huang
ef3a580104 chore(support-bundle): version bump
ref: 6256

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit a43faae14a)
2023-07-12 10:57:42 +08:00
Chin-Ya Huang
3b7a875675 fix(chart): update default setting log level
ref: 6257

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 7ffd3512be)
2023-07-11 21:04:47 +08:00
David Ko
9e40b9db5d release: 1.5.0
Signed-off-by: David Ko <dko@suse.com>
2023-07-07 13:34:40 +08:00
Derek Su
271edc53be Fix indent
Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 6b56bb2b72)
2023-06-28 11:37:26 +08:00
Derek Su
e7aa5e6334 spdk: help install git before configuring spdk environment
Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 0d94b6e4cf)
2023-06-28 11:25:36 +08:00
Derek Su
e9310044ce Highlight CPU usage in v2-data-engine setting
Longhorn 6126

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 46e1bb2cc3)
2023-06-28 11:25:36 +08:00
Phan Le
e462d7cdd1 Add volumeattachments resource to Longhorn ClusterRole
Longhorn-6197

Signed-off-by: Phan Le <phan.le@suse.com>
(cherry picked from commit a0879b8167)
2023-06-27 12:19:25 +08:00
David Ko
797a8e3308 release: 1.5.0-rc3
Signed-off-by: David Ko <dko@suse.com>
2023-06-26 19:51:25 +08:00
Chin-Ya Huang
9ce8ee65af feat(upgrade-responder): support requestSchema in setup script
ref: 5235

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 15db0882ae)
2023-06-26 16:19:56 +08:00
James Lu
47af77cee7 fix(deploy): remove error line in nfs backupstore
Remove an extra error line in backupstore/nfs-backupstore.yaml.

Signed-off-by: James Lu <james.lu@suse.com>
(cherry picked from commit c1d6d93374)
2023-06-21 11:41:01 +08:00
David Gaster
e651a6a368 ability to specify platform arch for air gap install
Signed-off-by: David Gaster <dngaster@gmail.com>
(cherry picked from commit a601ecc468)
2023-06-19 15:53:18 +08:00
Derek Su
428f1d54c1 Reduce BackupConcurrentLimit and RestoreConcurrentLimit to 2
Longhorn 6135

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 27f482bd9b)
2023-06-16 17:31:57 +08:00
Derek Su
dfcfd76c9e Update examples
Longhorn 6126

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 1bbefa8132)
2023-06-15 17:08:50 +08:00
Derek Su
214a37e450 Rename BackendStoreDrivers
Longhorn 6126

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit cdc6447b88)
2023-06-15 17:08:50 +08:00
David Ko
268bce4cf9 release: 1.5.0-rc2
Signed-off-by: David Ko <dko@suse.com>
2023-06-13 22:36:16 +08:00
Derek Su
ea30dc3dcc offline rebuilding/chart: add offline-replica-rebuilding setting
Longhorn 6071

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit b8069c547b)
2023-06-13 14:46:39 +08:00
Derek Su
c2d58ac6c9 offline rebuilding/chart: update crd.yaml
Longhorn 6071

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 2ae85e8dcb)
2023-06-13 14:46:39 +08:00
Derek Su
1191925c82 spdk: nvme-cli should be equal to or greater than 1.12
go-spdk-helper can support nvme-cli v2.0+.

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 975239ecc9)
2023-06-13 14:29:12 +08:00
Eric Weber
b8c0e27a12 Add iSCSI SELinux workaround for Fedora-like distributions
Signed-off-by: Eric Weber <eric.weber@suse.com>
(cherry picked from commit 34c07f3e5c)
2023-06-08 14:34:12 +08:00
Derek Su
c839128a9f spdk: nvme-cli should be between 1.12 and 1.16
Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 7cbb97100e)
2023-06-08 12:38:17 +08:00
Derek Su
11ec164f14 spdk: use 1024 MiB huge pages by default
Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit fa04ba6d29)
2023-06-06 12:45:44 +08:00
Derek Su
227219229c spdk: update expected-nr-hugepages to 512 in environment_check.sh
Longhorn 5739

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit a5041e1cf3)
2023-06-06 12:24:00 +08:00
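These two hugepage commits are consistent with each other, assuming the common 2 MiB hugepage size on x86_64 Linux (an assumption; the page size is not stated in the commits):

```shell
# 512 hugepages at the typical 2 MiB page size equals the 1024 MiB
# reservation that the SPDK environment uses by default.
pages=512
page_size_mib=2          # assumed default hugepage size on x86_64
total_mib=$(( pages * page_size_mib ))
echo "${total_mib} MiB"  # prints: 1024 MiB
```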
Tyler Hawkins
53d7909de8 fix: (chart) fix nodeDrainPolicy key
Removing a space between the key and colon.

Signed-off-by: Tyler Hawkins <3319104+tyzbit@users.noreply.github.com>
(cherry picked from commit e45a9c04f3)
2023-06-03 06:05:47 +08:00
David Ko
6b62e767b3 release: 1.5.0-rc1
Signed-off-by: David Ko <dko@suse.com>
2023-06-02 20:51:23 +08:00
21 changed files with 584 additions and 218 deletions

View File

@@ -1,7 +1,7 @@
 apiVersion: v1
 name: longhorn
-version: 1.4.0-dev
-appVersion: v1.4.0-dev
+version: 1.5.2-rc3
+appVersion: v1.5.2-rc3
 kubeVersion: ">=1.21.0-0"
 description: Longhorn is a distributed block storage system for Kubernetes.
 keywords:

View File

@@ -17,7 +17,7 @@ questions:
 label: Longhorn Manager Image Repository
 group: "Longhorn Images Settings"
 - variable: image.longhorn.manager.tag
-default: master-head
+default: v1.5.2-rc3
 description: "Specify Longhorn Manager Image Tag"
 type: string
 label: Longhorn Manager Image Tag
@@ -29,7 +29,7 @@ questions:
 label: Longhorn Engine Image Repository
 group: "Longhorn Images Settings"
 - variable: image.longhorn.engine.tag
-default: master-head
+default: v1.5.2-rc3
 description: "Specify Longhorn Engine Image Tag"
 type: string
 label: Longhorn Engine Image Tag
@@ -41,7 +41,7 @@ questions:
 label: Longhorn UI Image Repository
 group: "Longhorn Images Settings"
 - variable: image.longhorn.ui.tag
-default: master-head
+default: v1.5.2-rc3
 description: "Specify Longhorn UI Image Tag"
 type: string
 label: Longhorn UI Image Tag
@@ -53,7 +53,7 @@ questions:
 label: Longhorn Instance Manager Image Repository
 group: "Longhorn Images Settings"
 - variable: image.longhorn.instanceManager.tag
-default: v2_20221123
+default: v1.5.2-rc3
 description: "Specify Longhorn Instance Manager Image Tag"
 type: string
 label: Longhorn Instance Manager Image Tag
@@ -65,7 +65,7 @@ questions:
 label: Longhorn Share Manager Image Repository
 group: "Longhorn Images Settings"
 - variable: image.longhorn.shareManager.tag
-default: v1_20220914
+default: v1.5.2-rc3
 description: "Specify Longhorn Share Manager Image Tag"
 type: string
 label: Longhorn Share Manager Image Tag
@@ -77,7 +77,7 @@ questions:
 label: Longhorn Backing Image Manager Image Repository
 group: "Longhorn Images Settings"
 - variable: image.longhorn.backingImageManager.tag
-default: v3_20220808
+default: v1.5.2-rc3
 description: "Specify Longhorn Backing Image Manager Image Tag"
 type: string
 label: Longhorn Backing Image Manager Image Tag
@@ -89,7 +89,7 @@ questions:
 label: Longhorn Support Bundle Kit Image Repository
 group: "Longhorn Images Settings"
 - variable: image.longhorn.supportBundleKit.tag
-default: v0.0.24
+default: v0.0.27
 description: "Specify Longhorn Support Bundle Manager Image Tag"
 type: string
 label: Longhorn Support Bundle Kit Image Tag
@@ -327,6 +327,14 @@ The available volume spec options are:
 min: 0
 max: 100
 default: 25
+- variable: defaultSettings.storageReservedPercentageForDefaultDisk
+label: Storage Reserved Percentage For Default Disk
+description: "The reserved percentage specifies the percentage of disk space that will not be allocated to the default disk on each new Longhorn node."
+group: "Longhorn Default Settings"
+type: int
+min: 0
+max: 100
+default: 30
 - variable: defaultSettings.upgradeChecker
 label: Enable Upgrade Checker
 description: 'Upgrade Checker will check for new Longhorn version periodically. When there is a new version available, a notification will appear in the UI. By default true.'
@@ -533,37 +541,19 @@ Set the value to **0** to disable backup restore."
 type: int
 min: 0
 default: 300
-- variable: defaultSettings.guaranteedEngineManagerCPU
-label: Guaranteed Engine Manager CPU
-description: "This integer value indicates how many percentage of the total allocatable CPU on each node will be reserved for each engine manager Pod. For example, 10 means 10% of the total CPU on a node will be allocated to each engine manager pod on this node. This will help maintain engine stability during high node workload.
-In order to prevent unexpected volume engine crash as well as guarantee a relative acceptable IO performance, you can use the following formula to calculate a value for this setting:
-Guaranteed Engine Manager CPU = The estimated max Longhorn volume engine count on a node * 0.1 / The total allocatable CPUs on the node * 100.
+- variable: defaultSettings.guaranteedInstanceManagerCPU
+label: Guaranteed Instance Manager CPU
+description: "This integer value indicates how many percentage of the total allocatable CPU on each node will be reserved for each instance manager Pod. For example, 10 means 10% of the total CPU on a node will be allocated to each instance manager pod on this node. This will help maintain engine and replica stability during high node workload.
+In order to prevent unexpected volume instance (engine/replica) crash as well as guarantee a relative acceptable IO performance, you can use the following formula to calculate a value for this setting:
+`Guaranteed Instance Manager CPU = The estimated max Longhorn volume engine and replica count on a node * 0.1 / The total allocatable CPUs on the node * 100`
 The result of above calculation doesn't mean that's the maximum CPU resources the Longhorn workloads require. To fully exploit the Longhorn volume I/O performance, you can allocate/guarantee more CPU resources via this setting.
 If it's hard to estimate the usage now, you can leave it with the default value, which is 12%. Then you can tune it when there is no running workload using Longhorn volumes.
 WARNING:
-- Value 0 means unsetting CPU requests for engine manager pods.
-- Considering the possible new instance manager pods in the further system upgrade, this integer value is range from 0 to 40. And the sum with setting 'Guaranteed Engine Manager CPU' should not be greater than 40.
+- Value 0 means unsetting CPU requests for instance manager pods.
+- Considering the possible new instance manager pods in the further system upgrade, this integer value is range from 0 to 40.
 - One more set of instance manager pods may need to be deployed when the Longhorn system is upgraded. If current available CPUs of the nodes are not enough for the new instance manager pods, you need to detach the volumes using the oldest instance manager pods so that Longhorn can clean up the old pods automatically and release the CPU resources. And the new pods with the latest instance manager image will be launched then.
-- This global setting will be ignored for a node if the field \"EngineManagerCPURequest\" on the node is set.
-- After this setting is changed, all engine manager pods using this global setting on all the nodes will be automatically restarted. In other words, DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES."
+- This global setting will be ignored for a node if the field \"InstanceManagerCPURequest\" on the node is set.
+- After this setting is changed, all instance manager pods using this global setting on all the nodes will be automatically restarted. In other words, DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES."
-group: "Longhorn Default Settings"
-type: int
-min: 0
-max: 40
-default: 12
-- variable: defaultSettings.guaranteedReplicaManagerCPU
-label: Guaranteed Replica Manager CPU
-description: "This integer value indicates how many percentage of the total allocatable CPU on each node will be reserved for each replica manager Pod. 10 means 10% of the total CPU on a node will be allocated to each replica manager pod on this node. This will help maintain replica stability during high node workload.
-In order to prevent unexpected volume replica crash as well as guarantee a relative acceptable IO performance, you can use the following formula to calculate a value for this setting:
-Guaranteed Replica Manager CPU = The estimated max Longhorn volume replica count on a node * 0.1 / The total allocatable CPUs on the node * 100.
-The result of above calculation doesn't mean that's the maximum CPU resources the Longhorn workloads require. To fully exploit the Longhorn volume I/O performance, you can allocate/guarantee more CPU resources via this setting.
-If it's hard to estimate the usage now, you can leave it with the default value, which is 12%. Then you can tune it when there is no running workload using Longhorn volumes.
-WARNING:
-- Value 0 means unsetting CPU requests for replica manager pods.
-- Considering the possible new instance manager pods in the further system upgrade, this integer value is range from 0 to 40. And the sum with setting 'Guaranteed Replica Manager CPU' should not be greater than 40.
-- One more set of instance manager pods may need to be deployed when the Longhorn system is upgraded. If current available CPUs of the nodes are not enough for the new instance manager pods, you need to detach the volumes using the oldest instance manager pods so that Longhorn can clean up the old pods automatically and release the CPU resources. And the new pods with the latest instance manager image will be launched then.
-- This global setting will be ignored for a node if the field \"ReplicaManagerCPURequest\" on the node is set.
-- After this setting is changed, all replica manager pods using this global setting on all the nodes will be automatically restarted. In other words, DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES."
 group: "Longhorn Default Settings"
 type: int
 min: 0
@@ -574,7 +564,7 @@ Set the value to **0** to disable backup restore."
 description: "The log level Panic, Fatal, Error, Warn, Info, Debug, Trace used in longhorn manager. By default Debug."
 group: "Longhorn Default Settings"
 type: string
-default: "Debug"
+default: "Info"
 - variable: defaultSettings.kubernetesClusterAutoscalerEnabled
 label: Kubernetes Cluster Autoscaler Enabled (Experimental)
 description: "Enabling this setting will notify Longhorn that the cluster is using Kubernetes Cluster Autoscaler.
@@ -677,24 +667,34 @@ Set the value to **0** to disable backup restore."
 group: "Longhorn Default Settings"
 type: int
 min: 1
-default: 5
+default: 2
 - variable: defaultSettings.restoreConcurrentLimit
 label: Restore Concurrent Limit Per Backup
 description: "This setting controls how many worker threads per restore concurrently."
 group: "Longhorn Default Settings"
 type: int
 min: 1
-default: 5
+default: 2
-- variable: defaultSettings.spdk
-label: Enable SPDK Data Engine (Preview Feature)
-description: "This allows users to activate SPDK data engine. Currently, it is in the preview phase and should not be utilized in a production environment.
+- variable: defaultSettings.v2DataEngine
+label: V2 Data Engine
+description: "This allows users to activate v2 data engine based on SPDK. Currently, it is in the preview phase and should not be utilized in a production environment.
 WARNING:
-- The cluster must have pre-existing Multus installed, and NetworkAttachmentDefinition IPs are reachable between nodes.
-- DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES. Longhorn will try to block this setting update when there are attached volumes.
-- When applying the setting, Longhorn will restart all instance-manager pods."
+- DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES. Longhorn will block this setting update when there are attached volumes.
+- When applying the setting, Longhorn will restart all instance-manager pods.
+- When the V2 Data Engine is enabled, each instance-manager pod utilizes 1 CPU core. This high CPU usage is attributed to the spdk_tgt process running within each instance-manager pod. The spdk_tgt process is responsible for handling input/output (IO) operations and requires intensive polling. As a result, it consumes 100% of a dedicated CPU core to efficiently manage and process the IO requests, ensuring optimal performance and responsiveness for storage operations."
-group: "Longhorn SPDK Data Engine Settings"
+group: "Longhorn V2 Data Engine (Preview Feature) Settings"
 type: boolean
 default: false
+- variable: defaultSettings.offlineReplicaRebuilding
+label: Offline Replica Rebuilding
+description: "This setting allows users to enable the offline replica rebuilding for volumes using v2 data engine."
+group: "Longhorn V2 Data Engine (Preview Feature) Settings"
+required: true
+type: enum
+options:
+- "enabled"
+- "disabled"
+default: "enabled"
 - variable: persistence.defaultClass
 default: "true"
 description: "Set as default StorageClass for Longhorn"
@@ -743,18 +743,18 @@ Set the value to **0** to disable backup restore."
 group: "Longhorn Storage Class Settings"
 type: string
 default:
-- variable: defaultSettings.defaultNodeSelector.enable
-description: "Enable recurring Node selector for Longhorn StorageClass"
+- variable: persistence.defaultNodeSelector.enable
+description: "Enable Node selector for Longhorn StorageClass"
 group: "Longhorn Storage Class Settings"
 label: Enable Storage Class Node Selector
 type: boolean
 default: false
 show_subquestion_if: true
 subquestions:
-- variable: defaultSettings.defaultNodeSelector.selector
+- variable: persistence.defaultNodeSelector.selector
 label: Storage Class Node Selector
-description: 'We use NodeSelector when we want to bind PVC via StorageClass into desired mountpoint on the nodes tagged whith its value'
-group: "Longhorn Default Settings"
+description: 'We use NodeSelector when we want to bind PVC via StorageClass into desired mountpoint on the nodes tagged with its value'
+group: "Longhorn Storage Class Settings"
 type: string
 default:
 - variable: persistence.backingImage.enable
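As a worked example of the guaranteedInstanceManagerCPU formula quoted in the hunks above (illustrative numbers, not values taken from the chart):

```shell
# Guaranteed Instance Manager CPU =
#   estimated max engine+replica count * 0.1 / allocatable CPUs * 100
# which simplifies to: count * 10 / CPUs (a percentage).
instances=20   # assumed max Longhorn engines + replicas on the node
cpus=8         # assumed allocatable CPUs on the node
pct=$(( instances * 10 / cpus ))
echo "${pct}%"   # prints: 25%
```

So such a node would reserve 25% of its CPU for each instance-manager pod, versus the chart's 12% default.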

View File

@@ -1316,8 +1316,8 @@ spec:
 type: boolean
 backendStoreDriver:
 enum:
-- longhorn
-- spdk
+- v1
+- v2
 type: string
 backupVolume:
 type: string
@@ -2418,8 +2418,8 @@ spec:
 type: boolean
 backendStoreDriver:
 enum:
-- longhorn
-- spdk
+- v1
+- v2
 type: string
 backingImage:
 type: string
@@ -3314,8 +3314,8 @@ spec:
 type: string
 backendStoreDriver:
 enum:
-- longhorn
-- spdk
+- v1
+- v2
 type: string
 backingImage:
 type: string
@@ -3366,6 +3366,13 @@ spec:
 type: array
 numberOfReplicas:
 type: integer
+offlineReplicaRebuilding:
+description: OfflineReplicaRebuilding is used to determine if the offline replica rebuilding feature is enabled or not
+enum:
+- ignored
+- disabled
+- enabled
+type: string
 replicaAutoBalance:
 enum:
 - ignored
@@ -3503,6 +3510,8 @@ spec:
 type: string
 lastDegradedAt:
 type: string
+offlineReplicaRebuildingRequired:
+type: boolean
 ownerID:
 type: string
 pendingNodeID:
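Putting the CRD changes above together, a hypothetical Volume manifest using the renamed enum and the new field might look like this (a sketch: the field names come from the diff, everything else — API version, metadata, values — is assumed for illustration):

```yaml
apiVersion: longhorn.io/v1beta2   # assumed API version, for illustration only
kind: Volume
metadata:
  name: example-v2-volume
spec:
  numberOfReplicas: 3
  backendStoreDriver: v2              # formerly "spdk"; "v1" was "longhorn"
  offlineReplicaRebuilding: enabled   # one of: ignored, disabled, enabled
```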

View File

@@ -15,6 +15,7 @@ data:
 {{ if not (kindIs "invalid" .Values.defaultSettings.replicaAutoBalance) }}replica-auto-balance: {{ .Values.defaultSettings.replicaAutoBalance }}{{ end }}
 {{ if not (kindIs "invalid" .Values.defaultSettings.storageOverProvisioningPercentage) }}storage-over-provisioning-percentage: {{ .Values.defaultSettings.storageOverProvisioningPercentage }}{{ end }}
 {{ if not (kindIs "invalid" .Values.defaultSettings.storageMinimalAvailablePercentage) }}storage-minimal-available-percentage: {{ .Values.defaultSettings.storageMinimalAvailablePercentage }}{{ end }}
+{{ if not (kindIs "invalid" .Values.defaultSettings.storageReservedPercentageForDefaultDisk) }}storage-reserved-percentage-for-default-disk: {{ .Values.defaultSettings.storageReservedPercentageForDefaultDisk }}{{ end }}
 {{ if not (kindIs "invalid" .Values.defaultSettings.upgradeChecker) }}upgrade-checker: {{ .Values.defaultSettings.upgradeChecker }}{{ end }}
 {{ if not (kindIs "invalid" .Values.defaultSettings.defaultReplicaCount) }}default-replica-count: {{ .Values.defaultSettings.defaultReplicaCount }}{{ end }}
 {{ if not (kindIs "invalid" .Values.defaultSettings.defaultDataLocality) }}default-data-locality: {{ .Values.defaultSettings.defaultDataLocality }}{{ end }}
@@ -62,8 +63,7 @@ data:
 {{ if not (kindIs "invalid" .Values.defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit) }}concurrent-automatic-engine-upgrade-per-node-limit: {{ .Values.defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit }}{{ end }}
 {{ if not (kindIs "invalid" .Values.defaultSettings.backingImageCleanupWaitInterval) }}backing-image-cleanup-wait-interval: {{ .Values.defaultSettings.backingImageCleanupWaitInterval }}{{ end }}
 {{ if not (kindIs "invalid" .Values.defaultSettings.backingImageRecoveryWaitInterval) }}backing-image-recovery-wait-interval: {{ .Values.defaultSettings.backingImageRecoveryWaitInterval }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.guaranteedEngineManagerCPU) }}guaranteed-engine-manager-cpu: {{ .Values.defaultSettings.guaranteedEngineManagerCPU }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.guaranteedReplicaManagerCPU) }}guaranteed-replica-manager-cpu: {{ .Values.defaultSettings.guaranteedReplicaManagerCPU }}{{ end }}
+{{ if not (kindIs "invalid" .Values.defaultSettings.guaranteedInstanceManagerCPU) }}guaranteed-instance-manager-cpu: {{ .Values.defaultSettings.guaranteedInstanceManagerCPU }}{{ end }}
 {{ if not (kindIs "invalid" .Values.defaultSettings.kubernetesClusterAutoscalerEnabled) }}kubernetes-cluster-autoscaler-enabled: {{ .Values.defaultSettings.kubernetesClusterAutoscalerEnabled }}{{ end }}
 {{ if not (kindIs "invalid" .Values.defaultSettings.orphanAutoDeletion) }}orphan-auto-deletion: {{ .Values.defaultSettings.orphanAutoDeletion }}{{ end }}
 {{ if not (kindIs "invalid" .Values.defaultSettings.storageNetwork) }}storage-network: {{ .Values.defaultSettings.storageNetwork }}{{ end }}
@@ -79,4 +79,5 @@ data:
 {{ if not (kindIs "invalid" .Values.defaultSettings.backupCompressionMethod) }}backup-compression-method: {{ .Values.defaultSettings.backupCompressionMethod }}{{ end }}
 {{ if not (kindIs "invalid" .Values.defaultSettings.backupConcurrentLimit) }}backup-concurrent-limit: {{ .Values.defaultSettings.backupConcurrentLimit }}{{ end }}
 {{ if not (kindIs "invalid" .Values.defaultSettings.restoreConcurrentLimit) }}restore-concurrent-limit: {{ .Values.defaultSettings.restoreConcurrentLimit }}{{ end }}
-{{ if not (kindIs "invalid" .Values.defaultSettings.spdk) }}spdk: {{ .Values.defaultSettings.spdk }}{{ end }}
+{{ if not (kindIs "invalid" .Values.defaultSettings.v2DataEngine) }}v2-data-engine: {{ .Values.defaultSettings.v2DataEngine }}{{ end }}
+{{ if not (kindIs "invalid" .Values.defaultSettings.offlineReplicaRebuilding) }}offline-replica-rebuilding: {{ .Values.defaultSettings.offlineReplicaRebuilding }}{{ end }}


@@ -1,3 +1,4 @@
+{{- if .Values.helmPreUpgradeCheckerJob.enabled }}
 apiVersion: batch/v1
 kind: Job
 metadata:
@@ -54,3 +55,4 @@ spec:
 {{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
 {{- end }}
 {{- end }}
+{{- end }}


@@ -30,25 +30,25 @@ image:
   longhorn:
     engine:
       repository: longhornio/longhorn-engine
-      tag: master-head
+      tag: v1.5.2-rc3
     manager:
       repository: longhornio/longhorn-manager
-      tag: master-head
+      tag: v1.5.2-rc3
     ui:
       repository: longhornio/longhorn-ui
-      tag: master-head
+      tag: v1.5.2-rc3
     instanceManager:
       repository: longhornio/longhorn-instance-manager
-      tag: master-head
+      tag: v1.5.2-rc3
     shareManager:
       repository: longhornio/longhorn-share-manager
-      tag: master-head
+      tag: v1.5.2-rc3
     backingImageManager:
       repository: longhornio/backing-image-manager
-      tag: master-head
+      tag: v1.5.2-rc3
     supportBundleKit:
       repository: longhornio/support-bundle-kit
-      tag: v0.0.24
+      tag: v0.0.27
   csi:
     attacher:
       repository: longhornio/csi-attacher
@@ -102,6 +102,9 @@ persistence:
   selector: ""
   removeSnapshotsDuringFilesystemTrim: ignored # "enabled" or "disabled" otherwise
+helmPreUpgradeCheckerJob:
+  enabled: true
 csi:
   kubeletRootDir: ~
   attacherReplicaCount: ~
@@ -120,6 +123,7 @@ defaultSettings:
   replicaAutoBalance: ~
   storageOverProvisioningPercentage: ~
   storageMinimalAvailablePercentage: ~
+  storageReservedPercentageForDefaultDisk: ~
   upgradeChecker: ~
   defaultReplicaCount: ~
   defaultLonghornStaticStorageClass: ~
@@ -137,7 +141,7 @@ defaultSettings:
   disableSchedulingOnCordonedNode: ~
   replicaZoneSoftAntiAffinity: ~
   nodeDownPodDeletionPolicy: ~
-  nodeDrainPolicy : ~
+  nodeDrainPolicy: ~
   replicaReplenishmentWaitInterval: ~
   concurrentReplicaRebuildPerNodeLimit: ~
   concurrentVolumeBackupRestorePerNodeLimit: ~
@@ -148,8 +152,7 @@ defaultSettings:
   concurrentAutomaticEngineUpgradePerNodeLimit: ~
   backingImageCleanupWaitInterval: ~
   backingImageRecoveryWaitInterval: ~
-  guaranteedEngineManagerCPU: ~
-  guaranteedReplicaManagerCPU: ~
+  guaranteedInstanceManagerCPU: ~
   kubernetesClusterAutoscalerEnabled: ~
   orphanAutoDeletion: ~
   storageNetwork: ~
@@ -165,7 +168,8 @@ defaultSettings:
   backupCompressionMethod: ~
   backupConcurrentLimit: ~
   restoreConcurrentLimit: ~
-  spdk: ~
+  v2DataEngine: ~
+  offlineReplicaRebuilding: ~
 privateRegistry:
   createSecret: ~
   registryUrl: ~
@@ -227,54 +231,6 @@ longhornUI:
   # label-key1: "label-value1"
   # label-key2: "label-value2"
-longhornConversionWebhook:
-  replicas: 2
-  priorityClass: ~
-  tolerations: []
-  ## If you want to set tolerations for Longhorn conversion webhook Deployment, delete the `[]` in the line above
-  ## and uncomment this example block
-  # - key: "key"
-  #   operator: "Equal"
-  #   value: "value"
-  #   effect: "NoSchedule"
-  nodeSelector: {}
-  ## If you want to set node selector for Longhorn conversion webhook Deployment, delete the `{}` in the line above
-  ## and uncomment this example block
-  # label-key1: "label-value1"
-  # label-key2: "label-value2"
-longhornAdmissionWebhook:
-  replicas: 2
-  priorityClass: ~
-  tolerations: []
-  ## If you want to set tolerations for Longhorn admission webhook Deployment, delete the `[]` in the line above
-  ## and uncomment this example block
-  # - key: "key"
-  #   operator: "Equal"
-  #   value: "value"
-  #   effect: "NoSchedule"
-  nodeSelector: {}
-  ## If you want to set node selector for Longhorn admission webhook Deployment, delete the `{}` in the line above
-  ## and uncomment this example block
-  # label-key1: "label-value1"
-  # label-key2: "label-value2"
-longhornRecoveryBackend:
-  replicas: 2
-  priorityClass: ~
-  tolerations: []
-  ## If you want to set tolerations for Longhorn recovery backend Deployment, delete the `[]` in the line above
-  ## and uncomment this example block
-  # - key: "key"
-  #   operator: "Equal"
-  #   value: "value"
-  #   effect: "NoSchedule"
-  nodeSelector: {}
-  ## If you want to set node selector for Longhorn recovery backend Deployment, delete the `{}` in the line above
-  ## and uncomment this example block
-  # label-key1: "label-value1"
-  # label-key2: "label-value2"
 ingress:
   ## Set to true to enable ingress record generation
   enabled: false


@@ -6,7 +6,6 @@ metadata:
   labels:
     app: longhorn-test-nfs
 spec:
-spec:
   selector:
     matchLabels:
       app: longhorn-test-nfs


@@ -4,10 +4,10 @@ longhornio/csi-resizer:v1.7.0
 longhornio/csi-snapshotter:v6.2.1
 longhornio/csi-node-driver-registrar:v2.7.0
 longhornio/livenessprobe:v2.9.0
-longhornio/backing-image-manager:master-head
-longhornio/longhorn-engine:master-head
-longhornio/longhorn-instance-manager:master-head
-longhornio/longhorn-manager:master-head
-longhornio/longhorn-share-manager:master-head
-longhornio/longhorn-ui:master-head
-longhornio/support-bundle-kit:v0.0.24
+longhornio/backing-image-manager:v1.5.2-rc3
+longhornio/longhorn-engine:v1.5.2-rc3
+longhornio/longhorn-instance-manager:v1.5.2-rc3
+longhornio/longhorn-manager:v1.5.2-rc3
+longhornio/longhorn-share-manager:v1.5.2-rc3
+longhornio/longhorn-ui:v1.5.2-rc3
+longhornio/support-bundle-kit:v0.0.27


@@ -14,7 +14,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
 ---
 # Source: longhorn/templates/serviceaccount.yaml
 apiVersion: v1
@@ -25,7 +25,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
 ---
 # Source: longhorn/templates/default-setting.yaml
 apiVersion: v1
@@ -36,7 +36,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
 data:
   default-setting.yaml: |-
 ---
@@ -49,7 +49,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
 data:
   storageclass.yaml: |
     kind: StorageClass
@@ -79,7 +79,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: backingimagedatasources.longhorn.io
 spec:
@@ -250,7 +250,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: backingimagemanagers.longhorn.io
 spec:
@@ -426,7 +426,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: backingimages.longhorn.io
 spec:
@@ -585,7 +585,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: backups.longhorn.io
 spec:
@@ -781,7 +781,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: backuptargets.longhorn.io
 spec:
@@ -964,7 +964,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: backupvolumes.longhorn.io
 spec:
@@ -1131,7 +1131,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: engineimages.longhorn.io
 spec:
@@ -1323,7 +1323,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: engines.longhorn.io
 spec:
@@ -1419,8 +1419,8 @@ spec:
                 type: boolean
               backendStoreDriver:
                 enum:
-                - longhorn
-                - spdk
+                - v1
+                - v2
                 type: string
               backupVolume:
                 type: string
@@ -1678,7 +1678,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: instancemanagers.longhorn.io
 spec:
@@ -1919,7 +1919,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: nodes.longhorn.io
 spec:
@@ -2163,7 +2163,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: orphans.longhorn.io
 spec:
@@ -2434,7 +2434,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: replicas.longhorn.io
 spec:
@@ -2538,8 +2538,8 @@ spec:
                 type: boolean
               backendStoreDriver:
                 enum:
-                - longhorn
-                - spdk
+                - v1
+                - v2
                 type: string
               backingImage:
                 type: string
@@ -2651,7 +2651,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: settings.longhorn.io
 spec:
@@ -2742,7 +2742,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: sharemanagers.longhorn.io
 spec:
@@ -2857,7 +2857,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: snapshots.longhorn.io
 spec:
@@ -2984,7 +2984,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: supportbundles.longhorn.io
 spec:
@@ -3110,7 +3110,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: systembackups.longhorn.io
 spec:
@@ -3238,7 +3238,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: systemrestores.longhorn.io
 spec:
@@ -3340,7 +3340,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: volumes.longhorn.io
 spec:
@@ -3462,8 +3462,8 @@ spec:
                 type: string
               backendStoreDriver:
                 enum:
-                - longhorn
-                - spdk
+                - v1
+                - v2
                 type: string
               backingImage:
                 type: string
@@ -3514,6 +3514,13 @@ spec:
                 type: array
               numberOfReplicas:
                 type: integer
+              offlineReplicaRebuilding:
+                description: OfflineReplicaRebuilding is used to determine if the offline replica rebuilding feature is enabled or not
+                enum:
+                - ignored
+                - disabled
+                - enabled
+                type: string
               replicaAutoBalance:
                 enum:
                 - ignored
@@ -3651,6 +3658,8 @@ spec:
                 type: string
               lastDegradedAt:
                 type: string
+              offlineReplicaRebuildingRequired:
+                type: boolean
               ownerID:
                 type: string
               pendingNodeID:
@@ -3693,7 +3702,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     longhorn-manager: ""
   name: volumeattachments.longhorn.io
 spec:
@@ -3822,7 +3831,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
 rules:
 - apiGroups:
   - apiextensions.k8s.io
@@ -3888,7 +3897,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
@@ -3906,7 +3915,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
@@ -3923,7 +3932,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     app: longhorn-manager
   name: longhorn-backend
   namespace: longhorn-system
@@ -3944,7 +3953,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     app: longhorn-ui
   name: longhorn-frontend
   namespace: longhorn-system
@@ -3965,7 +3974,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     app: longhorn-conversion-webhook
   name: longhorn-conversion-webhook
   namespace: longhorn-system
@@ -3986,7 +3995,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     app: longhorn-admission-webhook
   name: longhorn-admission-webhook
   namespace: longhorn-system
@@ -4007,7 +4016,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     app: longhorn-recovery-backend
   name: longhorn-recovery-backend
   namespace: longhorn-system
@@ -4028,7 +4037,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
   name: longhorn-engine-manager
   namespace: longhorn-system
 spec:
@@ -4044,7 +4053,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
   name: longhorn-replica-manager
   namespace: longhorn-system
 spec:
@@ -4060,7 +4069,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     app: longhorn-manager
   name: longhorn-manager
   namespace: longhorn-system
@@ -4073,12 +4082,12 @@ spec:
       labels:
         app.kubernetes.io/name: longhorn
         app.kubernetes.io/instance: longhorn
-        app.kubernetes.io/version: v1.4.0-dev
+        app.kubernetes.io/version: v1.5.2-rc3
         app: longhorn-manager
     spec:
       containers:
       - name: longhorn-manager
-        image: longhornio/longhorn-manager:master-head
+        image: longhornio/longhorn-manager:v1.5.2-rc3
         imagePullPolicy: IfNotPresent
         securityContext:
           privileged: true
@@ -4087,17 +4096,17 @@ spec:
         - -d
         - daemon
         - --engine-image
-        - "longhornio/longhorn-engine:master-head"
+        - "longhornio/longhorn-engine:v1.5.2-rc3"
         - --instance-manager-image
-        - "longhornio/longhorn-instance-manager:master-head"
+        - "longhornio/longhorn-instance-manager:v1.5.2-rc3"
         - --share-manager-image
-        - "longhornio/longhorn-share-manager:master-head"
+        - "longhornio/longhorn-share-manager:v1.5.2-rc3"
         - --backing-image-manager-image
-        - "longhornio/backing-image-manager:master-head"
+        - "longhornio/backing-image-manager:v1.5.2-rc3"
         - --support-bundle-manager-image
-        - "longhornio/support-bundle-kit:v0.0.24"
+        - "longhornio/support-bundle-kit:v0.0.27"
        - --manager-image
-        - "longhornio/longhorn-manager:master-head"
+        - "longhornio/longhorn-manager:v1.5.2-rc3"
         - --service-account
         - longhorn-service-account
         ports:
@@ -4165,7 +4174,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
 spec:
   replicas: 1
   selector:
@@ -4176,23 +4185,23 @@ spec:
       labels:
         app.kubernetes.io/name: longhorn
         app.kubernetes.io/instance: longhorn
-        app.kubernetes.io/version: v1.4.0-dev
+        app.kubernetes.io/version: v1.5.2-rc3
         app: longhorn-driver-deployer
     spec:
       initContainers:
       - name: wait-longhorn-manager
-        image: longhornio/longhorn-manager:master-head
+        image: longhornio/longhorn-manager:v1.5.2-rc3
         command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done']
       containers:
       - name: longhorn-driver-deployer
-        image: longhornio/longhorn-manager:master-head
+        image: longhornio/longhorn-manager:v1.5.2-rc3
         imagePullPolicy: IfNotPresent
         command:
         - longhorn-manager
         - -d
         - deploy-driver
         - --manager-image
-        - "longhornio/longhorn-manager:master-head"
+        - "longhornio/longhorn-manager:v1.5.2-rc3"
         - --manager-url
         - http://longhorn-backend:9500/v1
         env:
@@ -4231,7 +4240,7 @@ metadata:
   labels:
     app.kubernetes.io/name: longhorn
     app.kubernetes.io/instance: longhorn
-    app.kubernetes.io/version: v1.4.0-dev
+    app.kubernetes.io/version: v1.5.2-rc3
     app: longhorn-ui
   name: longhorn-ui
   namespace: longhorn-system
@@ -4245,7 +4254,7 @@ spec:
       labels:
         app.kubernetes.io/name: longhorn
         app.kubernetes.io/instance: longhorn
-        app.kubernetes.io/version: v1.4.0-dev
+        app.kubernetes.io/version: v1.5.2-rc3
         app: longhorn-ui
     spec:
      affinity:
@@ -4262,7 +4271,7 @@ spec:
               topologyKey: kubernetes.io/hostname
       containers:
       - name: longhorn-ui
-        image: longhornio/longhorn-ui:master-head
+        image: longhornio/longhorn-ui:v1.5.2-rc3
         imagePullPolicy: IfNotPresent
         volumeMounts:
         - name : nginx-cache


@@ -0,0 +1,35 @@
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: longhorn-iscsi-selinux-workaround
+  labels:
+    app: longhorn-iscsi-selinux-workaround
+  annotations:
+    command: &cmd if ! rpm -q policycoreutils > /dev/null 2>&1; then echo "failed to apply workaround; only applicable in Fedora based distros with SELinux enabled"; exit; elif cd /tmp && echo '(allow iscsid_t self (capability (dac_override)))' > local_longhorn.cil && semodule -vi local_longhorn.cil && rm -f local_longhorn.cil; then echo "applied workaround successfully"; else echo "failed to apply workaround; error code $?"; fi
+spec:
+  selector:
+    matchLabels:
+      app: longhorn-iscsi-selinux-workaround
+  template:
+    metadata:
+      labels:
+        app: longhorn-iscsi-selinux-workaround
+    spec:
+      hostPID: true
+      initContainers:
+      - name: iscsi-selinux-workaround
+        command:
+        - nsenter
+        - --mount=/proc/1/ns/mnt
+        - --
+        - bash
+        - -c
+        - *cmd
+        image: alpine:3.17
+        securityContext:
+          privileged: true
+      containers:
+      - name: sleep
+        image: registry.k8s.io/pause:3.1
+  updateStrategy:
+    type: RollingUpdate


@@ -5,7 +5,7 @@ metadata:
   labels:
     app: longhorn-spdk-setup
   annotations:
-    command: &cmd rm -rf ${SPDK_DIR}; git clone -b longhorn https://github.com/longhorn/spdk.git ${SPDK_DIR} && bash ${SPDK_DIR}/scripts/setup.sh ${SPDK_OPTION}; if [ $? -eq 0 ]; then echo "vm.nr_hugepages=$((HUGEMEM/2))" >> /etc/sysctl.conf; echo "SPDK environment is configured successfully"; else echo "Failed to configure SPDK environment error code $?"; fi; rm -rf ${SPDK_DIR}
+    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y git; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y git; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y git; fi && if [ $? -eq 0 ]; then echo "git install successfully"; else echo "git install failed error code $?"; fi && rm -rf ${SPDK_DIR}; git clone -b longhorn https://github.com/longhorn/spdk.git ${SPDK_DIR} && bash ${SPDK_DIR}/scripts/setup.sh ${SPDK_OPTION}; if [ $? -eq 0 ]; then echo "vm.nr_hugepages=$((HUGEMEM/2))" >> /etc/sysctl.conf; echo "SPDK environment is configured successfully"; else echo "Failed to configure SPDK environment error code $?"; fi; rm -rf ${SPDK_DIR}
 spec:
   selector:
     matchLabels:
@@ -33,7 +33,7 @@ spec:
         - name: SPDK_OPTION
           value: ""
         - name: HUGEMEM
-          value: "2048"
+          value: "1024"
         - name: PCI_ALLOWED
           value: "none"
         - name: DRIVER_OVERRIDE


@@ -1,6 +1,7 @@
 #!/bin/bash
 UPGRADE_RESPONDER_REPO="https://github.com/longhorn/upgrade-responder.git"
+UPGRADE_RESPONDER_REPO_BRANCH="master"
 UPGRADE_RESPONDER_VALUE_YAML="upgrade-responder-value.yaml"
 UPGRADE_RESPONDER_IMAGE_REPO="longhornio/upgrade-responder"
 UPGRADE_RESPONDER_IMAGE_TAG="master-head"
@@ -59,12 +60,331 @@ secret:
   influxDBUrl: "${INFLUXDB_URL}"
   influxDBUser: "root"
   influxDBPassword: "root"
+configMap:
+  responseConfig: |-
+    {
+      "versions": [{
+        "name": "v1.0.0",
+        "releaseDate": "2020-05-18T12:30:00Z",
+        "tags": ["latest"]
+      }]
+    }
+  requestSchema: |-
+    {
+      "appVersionSchema": {
+        "dataType": "string",
+        "maxLen": 200
+      },
+      "extraTagInfoSchema": {
+        "hostKernelRelease": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "hostOsDistro": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "kubernetesNodeProvider": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "kubernetesVersion": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingAllowRecurringJobWhileVolumeDetached": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingAllowVolumeCreationWithDegradedAvailability": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingAutoCleanupSystemGeneratedSnapshot": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingAutoDeletePodWhenVolumeDetachedUnexpectedly": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingAutoSalvage": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingBackupCompressionMethod": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingBackupTarget": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingCrdApiVersion": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingCreateDefaultDiskLabeledNodes": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingDefaultDataLocality": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingDisableRevisionCounter": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingDisableSchedulingOnCordonedNode": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingFastReplicaRebuildEnabled": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingKubernetesClusterAutoscalerEnabled": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingNodeDownPodDeletionPolicy": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingNodeDrainPolicy": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingOfflineReplicaRebuilding": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingOrphanAutoDeletion": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingPriorityClass": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingRegistrySecret": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingRemoveSnapshotsDuringFilesystemTrim": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingReplicaAutoBalance": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingReplicaSoftAntiAffinity": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingReplicaZoneSoftAntiAffinity": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingRestoreVolumeRecurringJobs": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingSnapshotDataIntegrity": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingSnapshotDataIntegrityCronjob": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingSnapshotDataIntegrityImmediateCheckAfterSnapshotCreation": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingStorageNetwork": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingSystemManagedComponentsNodeSelector": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingSystemManagedPodsImagePullPolicy": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingTaintToleration": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornSettingV2DataEngine": {
+          "dataType": "string",
+          "maxLen": 200
+        }
+      },
+      "extraFieldInfoSchema": {
+        "longhornInstanceManagerAverageCpuUsageMilliCores": {
+          "dataType": "float"
+        },
+        "longhornInstanceManagerAverageMemoryUsageBytes": {
+          "dataType": "float"
+        },
+        "longhornManagerAverageCpuUsageMilliCores": {
+          "dataType": "float"
+        },
+        "longhornManagerAverageMemoryUsageBytes": {
+          "dataType": "float"
+        },
+        "longhornNamespaceUid": {
+          "dataType": "string",
+          "maxLen": 200
+        },
+        "longhornNodeCount": {
+          "dataType": "float"
+        },
+        "longhornNodeDiskHDDCount": {
+          "dataType": "float"
+        },
+        "longhornNodeDiskNVMeCount": {
+          "dataType": "float"
+        },
+        "longhornNodeDiskSSDCount": {
+          "dataType": "float"
+        },
+        "longhornSettingBackingImageCleanupWaitInterval": {
+          "dataType": "float"
+        },
+        "longhornSettingBackingImageRecoveryWaitInterval": {
+          "dataType": "float"
+        },
+        "longhornSettingBackupConcurrentLimit": {
+          "dataType": "float"
+        },
+        "longhornSettingBackupstorePollInterval": {
+          "dataType": "float"
+        },
+        "longhornSettingConcurrentAutomaticEngineUpgradePerNodeLimit": {
+          "dataType": "float"
+        },
"longhornSettingConcurrentReplicaRebuildPerNodeLimit": {
"dataType": "float"
},
"longhornSettingConcurrentVolumeBackupRestorePerNodeLimit": {
"dataType": "float"
},
"longhornSettingDefaultReplicaCount": {
"dataType": "float"
},
"longhornSettingEngineReplicaTimeout": {
"dataType": "float"
},
"longhornSettingFailedBackupTtl": {
"dataType": "float"
},
"longhornSettingGuaranteedInstanceManagerCpu": {
"dataType": "float"
},
"longhornSettingRecurringFailedJobsHistoryLimit": {
"dataType": "float"
},
"longhornSettingRecurringSuccessfulJobsHistoryLimit": {
"dataType": "float"
},
"longhornSettingReplicaFileSyncHttpClientTimeout": {
"dataType": "float"
},
"longhornSettingReplicaReplenishmentWaitInterval": {
"dataType": "float"
},
"longhornSettingRestoreConcurrentLimit": {
"dataType": "float"
},
"longhornSettingStorageMinimalAvailablePercentage": {
"dataType": "float"
},
"longhornSettingStorageOverProvisioningPercentage": {
"dataType": "float"
},
"longhornSettingStorageReservedPercentageForDefaultDisk": {
"dataType": "float"
},
"longhornSettingSupportBundleFailedHistoryLimit": {
"dataType": "float"
},
"longhornVolumeAccessModeRwoCount": {
"dataType": "float"
},
"longhornVolumeAccessModeRwxCount": {
"dataType": "float"
},
"longhornVolumeAccessModeUnknownCount": {
"dataType": "float"
},
"longhornVolumeAverageActualSizeBytes": {
"dataType": "float"
},
"longhornVolumeAverageNumberOfReplicas": {
"dataType": "float"
},
"longhornVolumeAverageSizeBytes": {
"dataType": "float"
},
"longhornVolumeAverageSnapshotCount": {
"dataType": "float"
},
"longhornVolumeDataLocalityBestEffortCount": {
"dataType": "float"
},
"longhornVolumeDataLocalityDisabledCount": {
"dataType": "float"
},
"longhornVolumeDataLocalityStrictLocalCount": {
"dataType": "float"
},
"longhornVolumeFrontendBlockdevCount": {
"dataType": "float"
},
"longhornVolumeFrontendIscsiCount": {
"dataType": "float"
},
"longhornVolumeOfflineReplicaRebuildingDisabledCount": {
"dataType": "float"
},
"longhornVolumeOfflineReplicaRebuildingEnabledCount": {
"dataType": "float"
},
"longhornVolumeReplicaAutoBalanceDisabledCount": {
"dataType": "float"
},
"longhornVolumeReplicaSoftAntiAffinityFalseCount": {
"dataType": "float"
},
"longhornVolumeReplicaZoneSoftAntiAffinityTrueCount": {
"dataType": "float"
},
"longhornVolumeRestoreVolumeRecurringJobFalseCount": {
"dataType": "float"
},
"longhornVolumeSnapshotDataIntegrityDisabledCount": {
"dataType": "float"
},
"longhornVolumeSnapshotDataIntegrityFastCheckCount": {
"dataType": "float"
},
"longhornVolumeUnmapMarkSnapChainRemovedFalseCount": {
"dataType": "float"
}
}
}
  image:
    repository: ${UPGRADE_RESPONDER_IMAGE_REPO}
    tag: ${UPGRADE_RESPONDER_IMAGE_TAG}
EOF

-  git clone ${UPGRADE_RESPONDER_REPO}
+  git clone -b ${UPGRADE_RESPONDER_REPO_BRANCH} ${UPGRADE_RESPONDER_REPO}
  helm upgrade --install ${APP_NAME}-upgrade-responder upgrade-responder/chart -f ${UPGRADE_RESPONDER_VALUE_YAML}
  wait_for_deployment "${APP_NAME}-upgrade-responder"
}
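Every string field in the schema above is capped at `maxLen` 200. As an illustration only (the helper name and validation logic are made up, not the upgrade responder's actual code), a client-side length check against one of those field names could look like:

```shell
#!/bin/sh
# Hypothetical helper: reject a telemetry string field whose value
# exceeds the schema's maxLen before sending it. Field names come from
# the schema above; the function itself is illustrative.
validate_string_field() {
  name=$1 value=$2 maxlen=$3
  if [ "${#value}" -gt "$maxlen" ]; then
    echo "$name: value exceeds maxLen $maxlen"
    return 1
  fi
  echo "$name: ok"
}

validate_string_field kubernetesVersion "v1.27.6+k3s1" 200
```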


@@ -51,7 +51,7 @@ https://github.com/longhorn/longhorn/issues/3546
 - Introduce a new gRPC server in Instance Manager.
-- Keep re-usable connections between Manager and Instance Managers.
+- Keep reusable connections between Manager and Instance Managers.
 - Allow Manager to fall back to engine binary call when communicating with old Instance Manager.


@@ -68,7 +68,7 @@ While the node where the share-manager pod is running is down, the share-manager
           │                            │
  HTTP API ┌─────────────┴──────────────┐
           │                            │
-          │ endpint 1                  │ endpoint N
+          │ endpoint 1                 │ endpoint N
 ┌──────────────────────┐    ┌─────────▼────────┐  ┌────────▼─────────┐
 │ share-manager pod    │    │ recovery-backend │  │ recovery-backend │
 │                      │    │ pod              │  │ pod              │


@@ -30,7 +30,7 @@ Overall, the proposed volume backup policies aim to improve the Longhorn system
 1. When volume backup policy is specified:
    - `if-not-present`: Longhorn will create a backup for volumes that do not have an existing backup.
-   - `alway`: Longhorn will create a backup for all volumes, regardless of their existing backups.
+   - `always`: Longhorn will create a backup for all volumes, regardless of their existing backups.
    - `disabled`: Longhorn will not create any backups for volumes.
 1. If a volume backup policy is not specified, the policy will be automatically set to `if-not-present`. This ensures that volumes without any existing backups will be backed up during the Longhorn system backup.


@@ -7,7 +7,7 @@ allowVolumeExpansion: true
 reclaimPolicy: Delete
 volumeBindingMode: Immediate
 parameters:
-  numberOfReplicas: "2"
+  numberOfReplicas: "3"
   staleReplicaTimeout: "2880"
   fromBackup: ""
   fsType: "ext4"
@@ -21,3 +21,4 @@ parameters:
 #  nodeSelector: "storage,fast"
 #  recurringJobSelector: '[{"name":"snap-group", "isGroup":true},
 #                          {"name":"backup", "isGroup":false}]'
+#  nfsOptions: "soft,timeo=150,retrans=3"


@@ -6,7 +6,7 @@ metadata:
 spec:
   accessModes:
     - ReadWriteOnce
-  storageClassName: longhorn-spdk
+  storageClassName: longhorn-v2-data-engine
   resources:
     requests:
       storage: 2Gi


@@ -1,17 +1,17 @@
 kind: StorageClass
 apiVersion: storage.k8s.io/v1
 metadata:
-  name: longhorn-spdk
+  name: longhorn-v2-data-engine
 provisioner: driver.longhorn.io
 allowVolumeExpansion: true
 reclaimPolicy: Delete
 volumeBindingMode: Immediate
 parameters:
   # backup, backingImage and snapshot related parameters are not supported in SPDK preview stage
-  numberOfReplicas: "2"
+  numberOfReplicas: "3"
   staleReplicaTimeout: "2880"
   fsType: "ext4"
-  backendStoreDriver: "spdk"
+  backendStoreDriver: "v2"
   # mkfsParams: "-I 256 -b 4096 -O ^metadata_csum,^64bit"
   # nodeSelector: "storage,fast"
   # recurringJobSelector: '[{"name":"snap-group", "isGroup":true},
@@ -23,5 +23,6 @@ parameters:
   # backingImageChecksum: "SHA512 checksum of the backing image"
   # unmapMarkSnapChainRemoved: "ignored"
   # diskSelector: "ssd,fast"
+  # nfsOptions: "soft,timeo=150,retrans=3"


@@ -109,16 +109,16 @@ set_packages_and_check_cmd() {
 detect_node_kernel_release() {
   local pod="$1"

-  KERNEL_RELEASE=$(kubectl exec -i $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'uname -r')
+  KERNEL_RELEASE=$(kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'uname -r')
   echo "$KERNEL_RELEASE"
 }

 detect_node_os() {
   local pod="$1"

-  OS=$(kubectl exec -i $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'grep -E "^ID_LIKE=" /etc/os-release | cut -d= -f2')
+  OS=$(kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'grep -E "^ID_LIKE=" /etc/os-release | cut -d= -f2')
   if [[ -z "${OS}" ]]; then
-    OS=$(kubectl exec -i $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'grep -E "^ID=" /etc/os-release | cut -d= -f2')
+    OS=$(kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'grep -E "^ID=" /etc/os-release | cut -d= -f2')
   fi
   echo "$OS"
 }
@@ -273,12 +273,34 @@ check_nodes() {
   fi
 }

+verlte() {
+  printf '%s\n' "$1" "$2" | sort -C -V
+}
+
+verlt() {
+  ! verlte "$2" "$1"
+}
+
+check_kernel_release() {
+  local pod=$1
+
+  recommended_kernel_release="5.8"
+
+  local kernel=$(detect_node_kernel_release ${pod})
+  if verlt "$kernel" "$recommended_kernel_release" ; then
+    local node=$(kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName)
+    warn "Node $node has outdated kernel release: $kernel. Recommending kernel release >= $recommended_kernel_release"
+    return 1
+  fi
+}
+
 check_iscsid() {
   local pod=$1

-  kubectl exec -t ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c "systemctl status --no-pager iscsid.service" > /dev/null 2>&1
+  kubectl exec ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c "systemctl status --no-pager iscsid.service" > /dev/null 2>&1
   if [ $? -ne 0 ]; then
-    kubectl exec -t ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c "systemctl status --no-pager iscsid.socket" > /dev/null 2>&1
+    kubectl exec ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c "systemctl status --no-pager iscsid.socket" > /dev/null 2>&1
     if [ $? -ne 0 ]; then
       node=$(kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName)
       error "Neither iscsid.service nor iscsid.socket is not running on ${node}"
@@ -290,7 +312,7 @@ check_iscsid() {
 check_multipathd() {
   local pod=$1

-  kubectl exec -t $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c "systemctl status --no-pager multipathd.service" > /dev/null 2>&1
+  kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c "systemctl status --no-pager multipathd.service" > /dev/null 2>&1
   if [ $? = 0 ]; then
     node=$(kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName)
     warn "multipathd is running on ${node}"
@@ -320,7 +342,7 @@ check_packages() {
 check_package() {
   local package=$1

-  kubectl exec -i $pod -- nsenter --mount=/proc/1/ns/mnt -- timeout 30 bash -c "$CHECK_CMD $package" > /dev/null 2>&1
+  kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- timeout 30 bash -c "$CHECK_CMD $package" > /dev/null 2>&1
   if [ $? -ne 0 ]; then
     node=$(kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName)
     error "$package is not found in $node."
@@ -341,7 +363,7 @@ check_nfs_client() {
   fi

   for option in "${options[@]}"; do
-    kubectl exec -t ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c "[ -f /boot/config-${kernel} ]" > /dev/null 2>&1
+    kubectl exec ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c "[ -f /boot/config-${kernel} ]" > /dev/null 2>&1
    if [ $? -ne 0 ]; then
      warn "Failed to check $option on node ${node}, because /boot/config-${kernel} does not exist on node ${node}"
      continue
@@ -368,18 +390,18 @@ check_kernel_module() {
     return 1
   fi

-  kubectl exec -t ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c "[ -e /boot/config-${kernel} ]" > /dev/null 2>&1
+  kubectl exec ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c "[ -e /boot/config-${kernel} ]" > /dev/null 2>&1
   if [ $? -ne 0 ]; then
     warn "Failed to check kernel config option ${option}, because /boot/config-${kernel} does not exist on node ${node}"
     return 1
   fi

-  value=$(kubectl exec -t ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c "grep "^$option=" /boot/config-${kernel} | cut -d= -f2")
+  value=$(kubectl exec ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c "grep "^$option=" /boot/config-${kernel} | cut -d= -f2")
   if [ -z "${value}" ]; then
     error "Failed to find kernel config $option on node ${node}"
     return 1
   elif [ "${value}" = "m" ]; then
-    kubectl exec -t ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c "lsmod | grep ${module}" > /dev/null 2>&1
+    kubectl exec ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c "lsmod | grep ${module}" > /dev/null 2>&1
     if [ $? -ne 0 ]; then
       node=$(kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName)
       error "kernel module ${module} is not enabled on ${node}"
@@ -397,7 +419,7 @@ check_hugepage() {
   local pod=$1
   local expected_nr_hugepages=$2

-  nr_hugepages=$(kubectl exec -i ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'cat /proc/sys/vm/nr_hugepages')
+  nr_hugepages=$(kubectl exec ${pod} -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'cat /proc/sys/vm/nr_hugepages')
   if [ $? -ne 0 ]; then
     error "Failed to check hugepage size on node ${node}"
     return 1
@@ -412,7 +434,7 @@ check_hugepage() {
 function check_nvme_cli() {
   local pod=$1

-  value=$(kubectl exec -i $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'nvme version' 2>/dev/null)
+  value=$(kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'nvme version' 2>/dev/null)
   if [ $? -ne 0 ]; then
     node=$(kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName)
     error "Failed to check nvme-cli version on node ${node}"
@@ -432,14 +454,14 @@ function check_sse42_support() {
   node=$(kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName)

-  machine=$(kubectl exec -i $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'uname -m' 2>/dev/null)
+  machine=$(kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'uname -m' 2>/dev/null)
   if [ $? -ne 0 ]; then
     error "Failed to check machine on node ${node}"
     return 1
   fi

   if [ "$machine" = "x86_64" ]; then
-    sse42_support=$(kubectl exec -i $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'grep -o sse4_2 /proc/cpuinfo | wc -l' 2>/dev/null)
+    sse42_support=$(kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'grep -o sse4_2 /proc/cpuinfo | wc -l' 2>/dev/null)
     if [ $? -ne 0 ]; then
       error "Failed to check SSE4.2 instruction set on node ${node}"
       return 1
@@ -462,14 +484,14 @@ Usage: $0 [OPTIONS]
 Options:
   -s, --enable-spdk            Enable checking SPDK prerequisites
-  -p, --expected-nr-hugepages  Expected number of hugepages for SPDK. Default: 1024
+  -p, --expected-nr-hugepages  Expected number of 2 MiB hugepages for SPDK. Default: 512
   -h, --help                   Show this help message and exit
 EOF
   exit 0
 }

 enable_spdk=false
-expected_nr_hugepages=1024
+expected_nr_hugepages=512
 while [[ $# -gt 0 ]]; do
   opt="$1"
   case $opt in
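The default drop from 1024 to 512 pages halves the memory the check expects to be reserved, since each SPDK hugepage here is 2 MiB. A quick sanity calculation of the new default:

```shell
#!/bin/sh
# 512 pages x 2 MiB = 1024 MiB (1 GiB) reserved per node, versus the
# old default of 1024 pages x 2 MiB = 2048 MiB.
expected_nr_hugepages=512
echo "$(( expected_nr_hugepages * 2 )) MiB"   # -> 1024 MiB
```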
@@ -493,7 +515,7 @@ done
 ######################################################
 # Main logics
 ######################################################
-DEPENDENCIES=("kubectl" "jq" "mktemp")
+DEPENDENCIES=("kubectl" "jq" "mktemp" "sort" "printf")
 check_local_dependencies "${DEPENDENCIES[@]}"

 # Check the each host has a unique hostname (for RWX volume)
@@ -507,6 +529,7 @@ create_ds
 wait_ds_ready

 check_mount_propagation
+check_nodes "kernel release" check_kernel_release
 check_nodes "iscsid" check_iscsid
 check_nodes "multipathd" check_multipathd
 check_nodes "packages" check_packages
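The new `verlte`/`verlt` helpers lean on GNU coreutils `sort -C -V`, which exits 0 only when its input lines are already in version order (hence `sort` and `printf` joining the dependency list above). Their behavior can be sanity-checked standalone:

```shell
#!/bin/sh
# verlte A B: true iff version A <= version B; sort -C -V exits 0
# exactly when the two lines are already version-sorted.
verlte() {
  printf '%s\n' "$1" "$2" | sort -C -V
}
# verlt A B: true iff A < B, since A < B  <=>  not (B <= A).
verlt() {
  ! verlte "$2" "$1"
}

verlt "5.3.18" "5.8"  && echo "5.3.18 is older than 5.8"
verlt "5.14.21" "5.8" || echo "5.14.21 meets the 5.8 recommendation"
```

Note that plain string comparison would get both of these wrong (`"5.14" < "5.8"` lexically); the `-V` version sort is what makes the kernel check reliable.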


@@ -15,6 +15,11 @@ while [[ $# -gt 0 ]]; do
     shift # past argument
     shift # past value
     ;;
+    -p|--platform)
+    platform="$2"
+    shift # past argument
+    shift # past value
+    ;;
     -h|--help)
     help="true"
     shift
@@ -28,8 +33,9 @@ while [[ $# -gt 0 ]]; do
 done

 usage () {
-    echo "USAGE: $0 [--image-list longhorn-images.txt] [--images longhorn-images.tar.gz]"
+    echo "USAGE: $0 [--image-list longhorn-images.txt] [--images longhorn-images.tar.gz] [--platform linux/amd64]"
     echo "  [-l|--images-list path] text file with list of images. 1 per line."
+    echo "  [-p|--platform linux/arch] if using images-list path, pulls the image with the specified platform"
     echo "  [-i|--images path] tar.gz generated by docker save. If this flag is empty, the script does not export images to a tar.gz file"
     echo "  [-h|--help] Usage message"
 }
@@ -42,7 +48,11 @@ fi
 set -e -x

 for i in $(cat ${list}); do
-    docker pull ${i}
+    if [ -n "$platform" ]; then
+        docker pull ${i} --platform $platform
+    else
+        docker pull ${i}
+    fi
 done

 if [[ $images ]]; then
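The conditional keeps backward compatibility: `--platform` is only passed to `docker pull` when the user supplied one. The branch logic can be exercised with `docker` stubbed out (the stub and the `pull_image` wrapper are illustrative; the real script calls the Docker CLI directly in its loop):

```shell
#!/bin/sh
# Stub docker so the control flow can run without a Docker daemon.
docker() { echo "docker $*"; }

pull_image() {
  if [ -n "$platform" ]; then
    docker pull "$1" --platform "$platform"
  else
    docker pull "$1"
  fi
}

pull_image busybox                  # -> docker pull busybox
platform="linux/arm64"
pull_image busybox                  # -> docker pull busybox --platform linux/arm64
```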


@@ -66,7 +66,7 @@ rules:
 - apiGroups: ["longhorn.io"]
   resources: ["volumes", "engines", "replicas", "settings", "engineimages", "nodes", "instancemanagers", "sharemanagers",
               "backingimages", "backingimagemanagers", "backingimagedatasources", "backuptargets", "backupvolumes", "backups",
-              "recurringjobs", "orphans", "snapshots", "supportbundles", "systembackups", "systemrestores"]
+              "recurringjobs", "orphans", "snapshots", "supportbundles", "systembackups", "systemrestores", "volumeattachments"]
   verbs: ["*"]
 - apiGroups: ["coordination.k8s.io"]
   resources: ["leases"]
@@ -106,7 +106,7 @@ spec:
     spec:
       containers:
       - name: longhorn-uninstall
-        image: longhornio/longhorn-manager:master-head
+        image: longhornio/longhorn-manager:v1.5.2-rc3
         imagePullPolicy: IfNotPresent
         command:
         - longhorn-manager
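When applying the uninstaller manifest from a checkout that still says `master-head`, the tag can be pinned locally the same way this commit does. A sketch (the temp-file handling is illustrative, and GNU `sed -i` syntax is assumed):

```shell
#!/bin/sh
# Pin the uninstaller image tag in a local copy of the manifest line,
# using the release tag from this commit.
manifest=$(mktemp)
echo '        image: longhornio/longhorn-manager:master-head' > "$manifest"
sed -i 's|longhorn-manager:master-head|longhorn-manager:v1.5.2-rc3|' "$manifest"
pinned=$(cat "$manifest")
rm -f "$manifest"
echo "$pinned"   # prints the image line with tag v1.5.2-rc3
```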