Compare commits


66 Commits

Author SHA1 Message Date
davidko
45aa826cbc release: v1.4.4
Signed-off-by: davidko <dko@suse.com>
2023-10-26 22:05:02 +08:00
davidko
d0d8c9c7b4 release: v1.4.4-rc2
Signed-off-by: davidko <dko@suse.com>
2023-10-24 12:07:29 +08:00
davidko
4dc7dfdf71 chore: fix typo
Signed-off-by: davidko <dko@suse.com>
2023-10-23 19:17:35 +08:00
davidko
05d2c51a28 fix: incorrect manager image in uninstall manifest
longhorn/longhorn#6895

Signed-off-by: davidko <dko@suse.com>
2023-10-23 19:17:35 +08:00
Phan Le
1410adf090 Fix bug: check script fails to perform all checks
When piping the script to bash (cat ./environment_check.sh | bash), the
part after `kubectl exec -i` will be interpreted as the input for the
command inside kubectl exec command. As the result, the env check script
doesn't perform the steps after that kubectl exec command. Removing the
`-i` flag fixed the issue.

Also, replacing `kubectl exec -t` by `kubectl exec` because the input of
kubectl exec command is not a terminal device
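The stdin-consumption problem this commit message describes can be reproduced with plain shell, no cluster required (a minimal sketch: `cat` stands in for `kubectl exec -i`, since both read from the pipe feeding bash):

```shell
# A three-step "script" piped into bash. Step 2 reads stdin, just as
# `kubectl exec -i` does, so it swallows the remainder of the script.
printf 'echo step1\ncat > /dev/null\necho step3\n' | bash
# Only "step1" is printed; "echo step3" was consumed by cat and never runs.
```

Dropping the `-i` flag means the inner command no longer shares bash's stdin, so the rest of the piped script executes normally.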

longhorn-5653

Signed-off-by: Phan Le <phan.le@suse.com>
2023-10-19 21:42:59 +08:00
davidko
730d156f0a release: v1.4.4-rc1
Signed-off-by: davidko <dko@suse.com>
2023-10-16 01:48:20 +08:00
Phan Le
bda6c52a40 Add kernel release check to environment_check.sh
longhorn-6854

Signed-off-by: Phan Le <phan.le@suse.com>
(cherry picked from commit d30a970ea8)
2023-10-11 17:12:42 -07:00
James Munson
575421bb27 Fix up mergify conflict - no v2 docs in 1.4.x
Signed-off-by: James Munson <james.munson@suse.com>
2023-09-28 08:43:22 +08:00
James Munson
957036ecc7 Add nfsOptions parameter to sample storageclass.yaml
Signed-off-by: James Munson <james.munson@suse.com>
(cherry picked from commit c0a258afef)

# Conflicts:
#	examples/v2/storageclass.yaml
2023-09-28 08:43:22 +08:00
Chin-Ya Huang
79a739a227 task: use head images for security scan
ref: 6737

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
2023-09-20 20:27:03 +08:00
James Munson
c2d665512f Fix some small errors on StorageClass NodeSelector.
Signed-off-by: James Munson <james.munson@suse.com>
2023-09-06 14:50:03 -07:00
Chin-Ya Huang
72eca4017b feat(support-bundle): version bump
ref: 6544

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 914fb89687)
2023-08-23 13:59:54 +08:00
David Ko
1e7eb3e937 release: 1.4.3
Signed-off-by: David Ko <dko@suse.com>
2023-07-14 22:14:16 +08:00
David Ko
d8b581988f release: 1.4.3-rc2
Signed-off-by: David Ko <dko@suse.com>
2023-07-12 15:43:40 +08:00
Chin-Ya Huang
bc60ef3b99 chore(support-bundle): version bump
ref: 6256

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit a43faae14a)
2023-07-12 10:56:20 +08:00
David Ko
46108ca75c release: 1.4.3-rc1
Signed-off-by: David Ko <dko@suse.com>
2023-06-30 16:06:50 +08:00
David Gaster
632e12beb4 ability to specify platform arch for air gap install
Signed-off-by: David Gaster <dngaster@gmail.com>
(cherry picked from commit a601ecc468)
2023-06-19 15:56:48 +08:00
Tyler Hawkins
9bab8e406e fix: (chart) fix nodeDrainPolicy key
Removing a space between the key and colon.

Signed-off-by: Tyler Hawkins <3319104+tyzbit@users.noreply.github.com>
(cherry picked from commit e45a9c04f3)
2023-06-03 09:07:14 +08:00
David Ko
97887bd5c9 release: 1.4.2
Signed-off-by: David Ko <dko@suse.com>
2023-05-12 15:49:09 +08:00
David Ko
5901cb8356 release: 1.4.2-rc1 with fixed image versions
Signed-off-by: David Ko <dko@suse.com>
2023-05-08 11:37:08 +08:00
David Ko
01e17bde7e release: 1.4.2-rc1
Signed-off-by: David Ko <dko@suse.com>
2023-05-05 16:46:47 +08:00
Shuo Wu
f8418241e5 example: Update network-policy
Signed-off-by: Shuo Wu <shuo.wu@suse.com>
(cherry picked from commit ab67f9c98c)
2023-04-26 20:11:34 +08:00
Chin-Ya Huang
e02d6a1c13 chore(support-bundle): version bump
Ref: 5614

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit e3e006cbcc)
2023-04-19 13:30:16 +08:00
Chin-Ya Huang
89e1a50e1b chore(support-bundle): version bump
Ref: 5614

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit e1cc7af587)
2023-04-14 19:29:03 +08:00
Chin-Ya Huang
74c4a3644e fix: merfigy conflict
Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
2023-04-11 08:33:39 +08:00
Tarasovych
98e400dbde Update values.yaml
(cherry picked from commit 3f5e636bc3)
2023-04-07 12:43:40 +08:00
Chin-Ya Huang
c11a1a9071 chore(support-bundle): version bump
Ref: 5614

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 54e6163356)

# Conflicts:
#	deploy/longhorn-images.txt
2023-04-07 12:32:47 +08:00
James Lu
fbeddd204b feat(recurring-job): update chart for new tasks
Ref: 4898

Signed-off-by: James Lu <james.lu@suse.com>
(cherry picked from commit 15701bbe26)
2023-03-20 16:21:40 +08:00
James Lu
61162f2028 feat(recurring-job): update YAML for new tasks
Ref: 4898

Signed-off-by: James Lu <james.lu@suse.com>
(cherry picked from commit 6c6cb23be1)
2023-03-20 16:21:40 +08:00
Ray Chang
e868297b9e fix(support-bundle): version bump to v0.0.20
Longhorn 5073

- New parameter: `SUPPORT_BUNDLE_COLLECTOR` to execute specified support-bundle-kit collector

Signed-off-by: Ray Chang <ray.chang@suse.com>
(cherry picked from commit 9abb26714b)
2023-03-20 11:27:49 +08:00
Phan Le
d32c6ed933 Add nodeDrainPolicy setting
longhorn-5549

Signed-off-by: Phan Le <phan.le@suse.com>
(cherry picked from commit 86d06696df)
2023-03-18 08:28:02 +08:00
David Ko
fa1a441cf7 docs: typo in lep
Signed-off-by: David Ko <dko@suse.com>
2023-03-13 18:04:21 +08:00
David Ko
f29950c4e1 release: 1.4.1
Signed-off-by: David Ko <dko@suse.com>
2023-03-13 18:04:21 +08:00
ChanYiLin
bf083b2d49 doc: update prerequisites in chart readme to make it consistent with documentation
Signed-off-by: Jack Lin <jack.lin@suse.com>
2023-03-13 10:36:14 +08:00
Viktor Hedefalk
662fbbaabe Update data_migration.yaml
Fixes #5484

(cherry picked from commit 92fd5b54ed)
2023-03-08 15:59:26 +08:00
David Ko
6aa0c28f0e release: 1.4.1-rc2
ref: longhorn/longhorn#5445

Signed-off-by: David Ko <dko@suse.com>
2023-03-06 22:11:15 +08:00
Chin-Ya Huang
1984b8c51e fix(support-bundle): version bump
- fix support-bundle agent missing registry secret

Ref: 5467

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 5a3f8d714b)
2023-03-03 11:09:51 +08:00
Rayan Das
6c631417a1 update k8s.gcr.io to registry.k8s.io
Signed-off-by: Rayan Das <rayandas91@gmail.com>
(cherry picked from commit e1ea3d7515)
2023-03-01 15:07:33 +08:00
David Ko
4e2f0dd488 release: 1.4.1-rc1
ref: longhorn/longhorn#5445

Signed-off-by: David Ko <dko@suse.com>
2023-02-24 13:00:20 +08:00
Chin-Ya Huang
c679fcad7d feat(recurring-job): update YAML for new tasks
Ref: 3836

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 2ea5513286)
2023-02-22 12:25:30 +08:00
Chin-Ya Huang
cde21dda79 feat(recurring-job): update chart for new tasks
Ref: 3836

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 761abc7611)
2023-02-22 12:25:30 +08:00
Chin-Ya Huang
034d5a2f31 fix(crd): update YAML
Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 4b17f8fbcd)
2023-02-17 15:15:17 +08:00
Chin-Ya Huang
36657dc0bd fix(crd): update chart
Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 8c5dd01964)
2023-02-17 15:15:17 +08:00
Phan Le
61d3be3c3b Update PSP validation
Longhorn-5339

Signed-off-by: Phan Le <phan.le@suse.com>
(cherry picked from commit 1deb51287b)
2023-02-07 16:12:43 +08:00
achims311
837778f389 Fix for bug #5304 (second version including POSIX way to call subroutine) (#5314)
* Fix for bug #5304.

It uses the same technologie to get the kernel release as it was used
before to get the os of the node

Signed-off-by: Achim Schaefer <longhorn@schaefer-home.eu>

* used a lower case variable name as suggested by innobead

Signed-off-by: Achim Schaefer <longhorn@schaefer-home.eu>

---------

Signed-off-by: Achim Schaefer <longhorn@schaefer-home.eu>
Co-authored-by: David Ko <dko@suse.com>
(cherry picked from commit 94a23e5b05)
2023-02-07 15:00:33 +08:00
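The approach the commit above describes can be sketched in POSIX sh (a hypothetical fragment; the real environment_check.sh runs its checks per node via kubectl):

```shell
# Read the kernel release the POSIX way: no bashisms, lower-case variable
# name as suggested in the review.
kernel_release=$(uname -r)
printf 'kernel release: %s\n' "$kernel_release"
```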
Haribo112
446fb68bbe Made environment_check.sh POSIX compliant (#5310)
Made environment_check.sh POSIX compliant

Signed-off-by: Harold Holsappel <h.holsappel@iwink.nl>
Co-authored-by: Harold Holsappel <h.holsappel@iwink.nl>
(cherry picked from commit 5a071e502c)
2023-02-06 18:06:21 +08:00
Thomas Fenzl
3c3931b31c update iscsi installation image to latest alpine.
(cherry picked from commit 674cdd0df0)
2023-02-05 23:17:49 +08:00
David Ko
ad3030cd33 fix: wrong indentation of priorityClassName in deployment-webhook.yaml
Signed-off-by: David Ko <dko@suse.com>
(cherry picked from commit d8a5c4ffd5)
2023-02-05 23:04:32 +08:00
Ray Chang
02023afd9d fix: update the supportBundleKit image description
Signed-off-by: Ray Chang <ray.chang@suse.com>
(cherry picked from commit ccf3740b5b)
2023-01-12 15:45:19 +08:00
Ray Chang
91e9d412b6 fix: add Support Bundle Kit image related variables in questions.yaml
Signed-off-by: Ray Chang <ray.chang@suse.com>
(cherry picked from commit 4250b68b0f)
2023-01-12 11:00:39 +08:00
Phan Le
5569b600c6 Update uninstallation info to include the 'Deleting Confirmation Flag' in chart
longhorn-5250

Signed-off-by: Phan Le <phan.le@suse.com>
(cherry picked from commit 69dcfa5277)
2023-01-11 14:58:05 +08:00
Ray Chang
bbe8eaf4c1 fix: fix the CSI Liveness Prob group in questions.yaml
Signed-off-by: Ray Chang <ray.chang@suse.com>
(cherry picked from commit a7e4b23350)
2023-01-11 11:17:38 +08:00
Ray Chang
224abcb02b fix: Correct formatting error in question.yaml file
Signed-off-by: Ray Chang <ray.chang@suse.com>
(cherry picked from commit 145b166720)
2023-01-05 18:00:05 +08:00
James Lu
5fd2416e40 fix: refine the indentation
The indentation of chart/questions.yaml in
`variable: defaultSettings.restoreVolumeRecurringJobs` is not
corrcet.

ref: 5196

Signed-off-by: James Lu <james.lu@suse.com>
(cherry picked from commit b06ce86784)
2023-01-05 13:32:10 +08:00
David Ko
42c23e0a51 build: 1.4.0
Signed-off-by: David Ko <dko@suse.com>
2022-12-30 12:49:59 +08:00
David Ko
68afe8acc0 build: 1.4.0-rc3
Signed-off-by: David Ko <dko@suse.com>
2022-12-28 09:17:25 +08:00
Derek Su
c32192c4d2 environment check: precisely check kernel option
Longhorn 3157

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit 62998adab2)
2022-12-26 20:24:19 +08:00
Derek Su
4fd6aac2ac environment_check.sh: add nfs client kernel support
Longhorn 3157

Signed-off-by: Derek Su <derek.su@suse.com>
(cherry picked from commit c83497b685)
2022-12-26 16:12:06 +08:00
Chin-Ya Huang
0db89e8f79 fix(uninstall): missing resource in ClusterRole
Ref: 5132, 5133

Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
(cherry picked from commit 38aa0d01d5)
2022-12-23 13:56:37 +08:00
David Ko
f48a8ee582 build: 1.4.0-rc2
Signed-off-by: David Ko <dko@suse.com>
2022-12-21 21:36:51 +08:00
Derek Su
db0d2d9359 chart: add replicaFileSyncHttpClientTimeout
Longhorn 5110

Signed-off-by: Derek Su <derek.su@suse.com>
2022-12-21 15:01:04 +08:00
James Lu
5369011477 build(image): bump support-bundle-kit
bump support-bundle-kit version to v0.0.17

Ref: 5107

Signed-off-by: James Lu <james.lu@suse.com>
2022-12-20 17:15:59 +08:00
Derek Su
ffca869ff3 chart: support customized number of replicas of webhook and recovery-backend
Longhorn 5087

Signed-off-by: Derek Su <derek.su@suse.com>
2022-12-16 20:41:06 +08:00
James Lu
404956f789 chore(ui): modify Affinity of UI for helm chart
Change the number of the replica from 1 to 2 for helm chart

Ref: 4987

Signed-off-by: James Lu <james.lu@suse.com>
2022-12-15 18:42:06 +08:00
James Lu
be24195384 chore(ui): modify Affinity of UI in deploy.yaml
Change the number of the replica from 1 to 2.

Ref: 4987

Signed-off-by: James Lu <james.lu@suse.com>
2022-12-15 18:42:06 +08:00
David Ko
bb8e9a143b build: 1.4.0-rc1
Signed-off-by: David Ko <dko@suse.com>
2022-12-13 15:34:37 +08:00
25 changed files with 403 additions and 168 deletions

View File

@@ -1,7 +1,7 @@
apiVersion: v1
name: longhorn
version: 1.4.0-dev
appVersion: v1.4.0-dev
version: 1.4.4
appVersion: v1.4.4
kubeVersion: ">=1.21.0-0"
description: Longhorn is a distributed block storage system for Kubernetes.
keywords:

View File

@@ -18,10 +18,24 @@ Longhorn is 100% open source software. Project source code is spread across a nu
## Prerequisites
1. A container runtime compatible with Kubernetes (Docker v1.13+, containerd v1.3.7+, etc.)
2. Kubernetes v1.18+
2. Kubernetes >= v1.21
3. Make sure `bash`, `curl`, `findmnt`, `grep`, `awk` and `blkid` has been installed in all nodes of the Kubernetes cluster.
4. Make sure `open-iscsi` has been installed, and the `iscsid` daemon is running on all nodes of the Kubernetes cluster. For GKE, recommended Ubuntu as guest OS image since it contains `open-iscsi` already.
## Upgrading to Kubernetes v1.25+
Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API.
As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `enablePSP` set to `false` if it has been previously set to `true`.
> **Note:**
> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).**
>
> If your charts get stuck in this state, you may have to clean up your Helm release secrets.
Upon setting `enablePSP` to false, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart.
As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Longhorn docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards.
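The in-place upgrade described above would look like the following (a sketch, assuming the release name `longhorn` and the `longhorn-system` namespace used in the installation steps below):

```shell
# Must run BEFORE upgrading the cluster to Kubernetes v1.25+:
# removes the chart-managed PSP resources from the release.
helm upgrade longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --set enablePSP=false
```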
## Installation
1. Add Longhorn chart repository.
```
@@ -49,11 +63,13 @@ helm install longhorn longhorn/longhorn --namespace longhorn-system
With Helm 2 to uninstall Longhorn.
```
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
helm delete longhorn --purge
```
With Helm 3 to uninstall Longhorn.
```
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
helm uninstall longhorn -n longhorn-system
kubectl delete namespace longhorn-system
```

View File

@@ -17,7 +17,7 @@ questions:
label: Longhorn Manager Image Repository
group: "Longhorn Images Settings"
- variable: image.longhorn.manager.tag
default: master-head
default: v1.4.4
description: "Specify Longhorn Manager Image Tag"
type: string
label: Longhorn Manager Image Tag
@@ -29,7 +29,7 @@ questions:
label: Longhorn Engine Image Repository
group: "Longhorn Images Settings"
- variable: image.longhorn.engine.tag
default: master-head
default: v1.4.4
description: "Specify Longhorn Engine Image Tag"
type: string
label: Longhorn Engine Image Tag
@@ -41,7 +41,7 @@ questions:
label: Longhorn UI Image Repository
group: "Longhorn Images Settings"
- variable: image.longhorn.ui.tag
default: master-head
default: v1.4.4
description: "Specify Longhorn UI Image Tag"
type: string
label: Longhorn UI Image Tag
@@ -53,7 +53,7 @@ questions:
label: Longhorn Instance Manager Image Repository
group: "Longhorn Images Settings"
- variable: image.longhorn.instanceManager.tag
default: v2_20221123
default: v1.4.4
description: "Specify Longhorn Instance Manager Image Tag"
type: string
label: Longhorn Instance Manager Image Tag
@@ -65,7 +65,7 @@ questions:
label: Longhorn Share Manager Image Repository
group: "Longhorn Images Settings"
- variable: image.longhorn.shareManager.tag
default: v1_20220914
default: v1.4.4
description: "Specify Longhorn Share Manager Image Tag"
type: string
label: Longhorn Share Manager Image Tag
@@ -77,11 +77,23 @@ questions:
label: Longhorn Backing Image Manager Image Repository
group: "Longhorn Images Settings"
- variable: image.longhorn.backingImageManager.tag
default: v3_20220808
default: v1.4.4
description: "Specify Longhorn Backing Image Manager Image Tag"
type: string
label: Longhorn Backing Image Manager Image Tag
group: "Longhorn Images Settings"
- variable: image.longhorn.supportBundleKit.repository
default: longhornio/support-bundle-kit
description: "Specify Longhorn Support Bundle Manager Image Repository"
type: string
label: Longhorn Support Bundle Kit Image Repository
group: "Longhorn Images Settings"
- variable: image.longhorn.supportBundleKit.tag
default: v0.0.27
description: "Specify Longhorn Support Bundle Manager Image Tag"
type: string
label: Longhorn Support Bundle Kit Image Tag
group: "Longhorn Images Settings"
- variable: image.csi.attacher.repository
default: longhornio/csi-attacher
description: "Specify CSI attacher image repository. Leave blank to autodetect."
@@ -147,7 +159,7 @@ questions:
description: "Specify CSI liveness probe image repository. Leave blank to autodetect."
type: string
label: Longhorn CSI Liveness Probe Image Repository
group: "Longhorn CSI Liveness Probe Images"
group: "Longhorn CSI Driver Images"
- variable: image.csi.livenessProbe.tag
default: v2.8.0
description: "Specify CSI liveness probe image tag. Leave blank to autodetect."
@@ -365,7 +377,7 @@ The available volume setting options are:
default: "false"
- variable: defaultSettings.recurringSuccessfulJobsHistoryLimit
label: Cronjob Successful Jobs History Limit
description: "This setting specifies how many successful backup or snapshot job histories should be retained. History will not be retained if the value is 0.",
description: "This setting specifies how many successful backup or snapshot job histories should be retained. History will not be retained if the value is 0."
group: "Longhorn Default Settings"
type: int
min: 0
@@ -379,9 +391,9 @@ The available volume setting options are:
default: 1
- variable: defaultSettings.supportBundleFailedHistoryLimit
label: SupportBundle Failed History Limit
description: This setting specifies how many failed support bundles can exist in the cluster.
description: "This setting specifies how many failed support bundles can exist in the cluster.
The retained failed support bundle is for analysis purposes and needs to clean up manually.
Set this value to **0** to have Longhorn automatically purge all failed support bundles.
Set this value to **0** to have Longhorn automatically purge all failed support bundles."
group: "Longhorn Default Settings"
type: int
min: 0
@@ -434,6 +446,19 @@ If this setting is enabled, Longhorn will **not** block `kubectl drain` action o
group: "Longhorn Default Settings"
type: boolean
default: "false"
- variable: defaultSettings.nodeDrainPolicy
label: Node Drain Policy
description: "Define the policy to use when a node with the last healthy replica of a volume is drained.
- **block-if-contains-last-replica** Longhorn will block the drain when the node contains the last healthy replica of a volume.
- **allow-if-replica-is-stopped** Longhorn will allow the drain when the node contains the last healthy replica of a volume but the replica is stopped. WARNING: possible data loss if the node is removed after draining. Select this option if you want to drain the node and do in-place upgrade/maintenance.
- **always-allow** Longhorn will allow the drain even though the node contains the last healthy replica of a volume. WARNING: possible data loss if the node is removed after draining. Also possible data corruption if the last replica was running during the draining."
group: "Longhorn Default Settings"
type: enum
options:
- "block-if-contains-last-replica"
- "allow-if-replica-is-stopped"
- "always-allow"
default: "block-if-contains-last-replica"
- variable: defaultSettings.mkfsExt4Parameters
label: Custom mkfs.ext4 parameters
description: "Allows setting additional filesystem creation parameters for ext4. For older host kernels it might be necessary to disable the optional ext4 metadata_csum feature by specifying `-O ^64bit,^metadata_csum`."
@@ -642,6 +667,12 @@ Set the value to **0** to disable backup restore."
group: "Longhorn Default Settings"
type: boolean
default: false
- variable: defaultSettings.replicaFileSyncHttpClientTimeout
label: Timeout of HTTP Client to Replica File Sync Server
description: "In seconds. The setting specifies the HTTP client timeout to the file sync server."
group: "Longhorn Default Settings"
type: int
default: "30"
- variable: persistence.defaultClass
default: "true"
description: "Set as default StorageClass for Longhorn"
@@ -690,18 +721,18 @@ Set the value to **0** to disable backup restore."
group: "Longhorn Storage Class Settings"
type: string
default:
- variable: defaultSettings.defaultNodeSelector.enable
description: "Enable recurring Node selector for Longhorn StorageClass"
- variable: persistence.defaultNodeSelector.enable
description: "Enable Node selector for Longhorn StorageClass"
group: "Longhorn Storage Class Settings"
label: Enable Storage Class Node Selector
type: boolean
default: false
show_subquestion_if: true
subquestions:
- variable: defaultSettings.defaultNodeSelector.selector
- variable: persistence.defaultNodeSelector.selector
label: Storage Class Node Selector
description: 'We use NodeSelector when we want to bind PVC via StorageClass into desired mountpoint on the nodes tagged whith its value'
group: "Longhorn Default Settings"
description: 'We use NodeSelector when we want to bind PVC via StorageClass into desired mountpoint on the nodes tagged with its value'
group: "Longhorn Storage Class Settings"
type: string
default:
- variable: persistence.backingImage.enable
@@ -806,7 +837,7 @@ Set the value to **0** to disable backup restore."
show_if: "service.ui.type=NodePort||service.ui.type=LoadBalancer"
label: UI Service NodePort number
- variable: enablePSP
default: "true"
default: "false"
description: "Setup a pod security policy for Longhorn workloads."
label: Pod Security Policy
type: boolean

View File

@@ -1669,7 +1669,7 @@ spec:
description: InstanceManagerSpec defines the desired state of the Longhorn instancer manager
properties:
engineImage:
description: 'TODO: deprecate this field'
description: 'Deprecated: This field is useless.'
type: string
image:
type: string
@@ -2153,7 +2153,7 @@ spec:
jsonPath: .spec.groups
name: Groups
type: string
- description: Should be one of "backup" or "snapshot"
- description: Should be one of "snapshot", "snapshot-force-create", "snapshot-cleanup", "snapshot-delete", "backup" or "backup-force-create"
jsonPath: .spec.task
name: Task
type: string
@@ -2215,10 +2215,14 @@ spec:
description: The retain count of the snapshot/backup.
type: integer
task:
description: The recurring job type. Can be "snapshot" or "backup".
description: The recurring job task. Can be "snapshot", "snapshot-force-create", "snapshot-cleanup", "snapshot-delete", "backup" or "backup-force-create".
enum:
- snapshot
- snapshot-force-create
- snapshot-cleanup
- snapshot-delete
- backup
- backup-force-create
type: string
type: object
status:
@@ -3290,7 +3294,7 @@ spec:
recurringJobs:
description: Deprecated. Replaced by a separate resource named "RecurringJob"
items:
description: 'VolumeRecurringJobSpec is a deprecated struct. TODO: Should be removed when recurringJobs gets removed from the volume spec.'
description: 'Deprecated: This field is useless and has been replaced by the RecurringJob CRD'
properties:
concurrency:
type: integer
@@ -3311,7 +3315,11 @@ spec:
task:
enum:
- snapshot
- snapshot-force-create
- snapshot-cleanup
- snapshot-delete
- backup
- backup-force-create
type: string
type: object
type: array

View File

@@ -52,6 +52,7 @@ data:
{{ if not (kindIs "invalid" .Values.defaultSettings.replicaZoneSoftAntiAffinity) }}replica-zone-soft-anti-affinity: {{ .Values.defaultSettings.replicaZoneSoftAntiAffinity }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.nodeDownPodDeletionPolicy) }}node-down-pod-deletion-policy: {{ .Values.defaultSettings.nodeDownPodDeletionPolicy }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.allowNodeDrainWithLastHealthyReplica) }}allow-node-drain-with-last-healthy-replica: {{ .Values.defaultSettings.allowNodeDrainWithLastHealthyReplica }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.nodeDrainPolicy) }}node-drain-policy: {{ .Values.defaultSettings.nodeDrainPolicy }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.mkfsExt4Parameters) }}mkfs-ext4-parameters: {{ .Values.defaultSettings.mkfsExt4Parameters }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.disableReplicaRebuild) }}disable-replica-rebuild: {{ .Values.defaultSettings.disableReplicaRebuild }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.replicaReplenishmentWaitInterval) }}replica-replenishment-wait-interval: {{ .Values.defaultSettings.replicaReplenishmentWaitInterval }}{{ end }}
@@ -76,3 +77,4 @@ data:
{{ if not (kindIs "invalid" .Values.defaultSettings.snapshotDataIntegrityCronjob) }}snapshot-data-integrity-cronjob: {{ .Values.defaultSettings.snapshotDataIntegrityCronjob }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.removeSnapshotsDuringFilesystemTrim) }}remove-snapshots-during-filesystem-trim: {{ .Values.defaultSettings.removeSnapshotsDuringFilesystemTrim }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.fastReplicaRebuildEnabled) }}fast-replica-rebuild-enabled: {{ .Values.defaultSettings.fastReplicaRebuildEnabled }}{{ end }}
{{ if not (kindIs "invalid" .Values.defaultSettings.replicaFileSyncHttpClientTimeout) }}replica-file-sync-http-client-timeout: {{ .Values.defaultSettings.replicaFileSyncHttpClientTimeout }}{{ end }}

View File

@@ -6,7 +6,7 @@ metadata:
name: longhorn-recovery-backend
namespace: {{ include "release_namespace" . }}
spec:
replicas: 2
replicas: {{ .Values.longhornRecoveryBackend.replicas }}
selector:
matchLabels:
app: longhorn-recovery-backend
@@ -59,15 +59,25 @@ spec:
imagePullSecrets:
- name: {{ .Values.privateRegistry.registrySecret }}
{{- end }}
{{- if .Values.longhornDriver.priorityClass }}
priorityClassName: {{ .Values.longhornDriver.priorityClass | quote}}
{{- if .Values.longhornRecoveryBackend.priorityClass }}
priorityClassName: {{ .Values.longhornRecoveryBackend.priorityClass | quote }}
{{- end }}
{{- if .Values.longhornDriver.tolerations }}
{{- if or .Values.longhornRecoveryBackend.tolerations .Values.global.cattle.windowsCluster.enabled }}
tolerations:
{{ toYaml .Values.longhornDriver.tolerations | indent 6 }}
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
{{- end }}
{{- if .Values.longhornDriver.nodeSelector }}
{{- if .Values.longhornRecoveryBackend.tolerations }}
{{ toYaml .Values.longhornRecoveryBackend.tolerations | indent 6 }}
{{- end }}
{{- end }}
{{- if or .Values.longhornRecoveryBackend.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
nodeSelector:
{{ toYaml .Values.longhornDriver.nodeSelector | indent 8 }}
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.longhornRecoveryBackend.nodeSelector }}
{{ toYaml .Values.longhornRecoveryBackend.nodeSelector | indent 8 }}
{{- end }}
{{- end }}
serviceAccountName: longhorn-service-account

View File

@@ -15,6 +15,18 @@ spec:
labels: {{- include "longhorn.labels" . | nindent 8 }}
app: longhorn-ui
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- longhorn-ui
topologyKey: kubernetes.io/hostname
containers:
- name: longhorn-ui
image: {{ template "registry_url" . }}{{ .Values.image.longhorn.ui.repository }}:{{ .Values.image.longhorn.ui.tag }}

View File

@@ -6,7 +6,7 @@ metadata:
name: longhorn-conversion-webhook
namespace: {{ include "release_namespace" . }}
spec:
replicas: 2
replicas: {{ .Values.longhornConversionWebhook.replicas }}
selector:
matchLabels:
app: longhorn-conversion-webhook
@@ -53,25 +53,25 @@ spec:
imagePullSecrets:
- name: {{ .Values.privateRegistry.registrySecret }}
{{- end }}
{{- if .Values.longhornDriver.priorityClass }}
priorityClassName: {{ .Values.longhornDriver.priorityClass | quote }}
{{- if .Values.longhornConversionWebhook.priorityClass }}
priorityClassName: {{ .Values.longhornConversionWebhook.priorityClass | quote }}
{{- end }}
{{- if or .Values.longhornDriver.tolerations .Values.global.cattle.windowsCluster.enabled }}
{{- if or .Values.longhornConversionWebhook.tolerations .Values.global.cattle.windowsCluster.enabled }}
tolerations:
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
{{- end }}
{{- if .Values.longhornDriver.tolerations }}
{{ toYaml .Values.longhornDriver.tolerations | indent 6 }}
{{- if .Values.longhornConversionWebhook.tolerations }}
{{ toYaml .Values.longhornConversionWebhook.tolerations | indent 6 }}
{{- end }}
{{- end }}
{{- if or .Values.longhornDriver.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
{{- if or .Values.longhornConversionWebhook.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
nodeSelector:
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.longhornDriver.nodeSelector }}
{{ toYaml .Values.longhornDriver.nodeSelector | indent 8 }}
{{- if .Values.longhornConversionWebhook.nodeSelector }}
{{ toYaml .Values.longhornConversionWebhook.nodeSelector | indent 8 }}
{{- end }}
{{- end }}
serviceAccountName: longhorn-service-account
@@ -84,7 +84,7 @@ metadata:
name: longhorn-admission-webhook
namespace: {{ include "release_namespace" . }}
spec:
replicas: 2
replicas: {{ .Values.longhornAdmissionWebhook.replicas }}
selector:
matchLabels:
app: longhorn-admission-webhook
@@ -142,25 +142,25 @@ spec:
imagePullSecrets:
- name: {{ .Values.privateRegistry.registrySecret }}
{{- end }}
{{- if .Values.longhornDriver.priorityClass }}
priorityClassName: {{ .Values.longhornDriver.priorityClass | quote }}
{{- if .Values.longhornAdmissionWebhook.priorityClass }}
priorityClassName: {{ .Values.longhornAdmissionWebhook.priorityClass | quote }}
{{- end }}
{{- if or .Values.longhornDriver.tolerations .Values.global.cattle.windowsCluster.enabled }}
{{- if or .Values.longhornAdmissionWebhook.tolerations .Values.global.cattle.windowsCluster.enabled }}
tolerations:
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
{{- end }}
{{- if .Values.longhornDriver.tolerations }}
{{ toYaml .Values.longhornDriver.tolerations | indent 6 }}
{{- if .Values.longhornAdmissionWebhook.tolerations }}
{{ toYaml .Values.longhornAdmissionWebhook.tolerations | indent 6 }}
{{- end }}
{{- end }}
{{- if or .Values.longhornDriver.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
{{- if or .Values.longhornAdmissionWebhook.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
nodeSelector:
{{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
{{- end }}
{{- if or .Values.longhornDriver.nodeSelector }}
{{ toYaml .Values.longhornDriver.nodeSelector | indent 8 }}
{{- if .Values.longhornAdmissionWebhook.nodeSelector }}
{{ toYaml .Values.longhornAdmissionWebhook.nodeSelector | indent 8 }}
{{- end }}
{{- end }}
serviceAccountName: longhorn-service-account


@ -0,0 +1,7 @@
#{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}}
#{{- if .Values.enablePSP }}
#{{- if not (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }}
#{{- fail "The target cluster does not have the PodSecurityPolicy API resource. Please disable PSPs in this chart before proceeding." -}}
#{{- end }}
#{{- end }}
#{{- end }}


@ -25,25 +25,25 @@ image:
longhorn:
engine:
repository: longhornio/longhorn-engine
tag: master-head
tag: v1.4.4
manager:
repository: longhornio/longhorn-manager
tag: master-head
tag: v1.4.4
ui:
repository: longhornio/longhorn-ui
tag: master-head
tag: v1.4.4
instanceManager:
repository: longhornio/longhorn-instance-manager
tag: master-head
tag: v1.4.4
shareManager:
repository: longhornio/longhorn-share-manager
tag: master-head
tag: v1.4.4
backingImageManager:
repository: longhornio/backing-image-manager
tag: master-head
tag: v1.4.4
supportBundleKit:
repository: longhornio/support-bundle-kit
tag: v0.0.16
tag: v0.0.27
csi:
attacher:
repository: longhornio/csi-attacher
@ -94,7 +94,7 @@ persistence:
expectedChecksum: ~
defaultNodeSelector:
enable: false # disable by default
selector: []
selector: ""
removeSnapshotsDuringFilesystemTrim: ignored # "enabled" or "disabled" otherwise
csi:
@ -133,6 +133,7 @@ defaultSettings:
replicaZoneSoftAntiAffinity: ~
nodeDownPodDeletionPolicy: ~
allowNodeDrainWithLastHealthyReplica: ~
nodeDrainPolicy: ~
mkfsExt4Parameters: ~
disableReplicaRebuild: ~
replicaReplenishmentWaitInterval: ~
@ -157,6 +158,7 @@ defaultSettings:
snapshotDataIntegrityCronjob: ~
removeSnapshotsDuringFilesystemTrim: ~
fastReplicaRebuildEnabled: ~
replicaFileSyncHttpClientTimeout: ~
privateRegistry:
createSecret: ~
registryUrl: ~
@ -203,7 +205,7 @@ longhornDriver:
# label-key2: "label-value2"
longhornUI:
replicas: 1
replicas: 2
priorityClass: ~
tolerations: []
## If you want to set tolerations for Longhorn UI Deployment, delete the `[]` in the line above
@ -218,6 +220,54 @@ longhornUI:
# label-key1: "label-value1"
# label-key2: "label-value2"
longhornConversionWebhook:
replicas: 2
priorityClass: ~
tolerations: []
## If you want to set tolerations for Longhorn conversion webhook Deployment, delete the `[]` in the line above
## and uncomment this example block
# - key: "key"
# operator: "Equal"
# value: "value"
# effect: "NoSchedule"
nodeSelector: {}
## If you want to set node selector for Longhorn conversion webhook Deployment, delete the `{}` in the line above
## and uncomment this example block
# label-key1: "label-value1"
# label-key2: "label-value2"
longhornAdmissionWebhook:
replicas: 2
priorityClass: ~
tolerations: []
## If you want to set tolerations for Longhorn admission webhook Deployment, delete the `[]` in the line above
## and uncomment this example block
# - key: "key"
# operator: "Equal"
# value: "value"
# effect: "NoSchedule"
nodeSelector: {}
## If you want to set node selector for Longhorn admission webhook Deployment, delete the `{}` in the line above
## and uncomment this example block
# label-key1: "label-value1"
# label-key2: "label-value2"
longhornRecoveryBackend:
replicas: 2
priorityClass: ~
tolerations: []
## If you want to set tolerations for Longhorn recovery backend Deployment, delete the `[]` in the line above
## and uncomment this example block
# - key: "key"
# operator: "Equal"
# value: "value"
# effect: "NoSchedule"
nodeSelector: {}
## If you want to set node selector for Longhorn recovery backend Deployment, delete the `{}` in the line above
## and uncomment this example block
# label-key1: "label-value1"
# label-key2: "label-value2"
ingress:
## Set to true to enable ingress record generation
enabled: false


@ -4,10 +4,10 @@ longhornio/csi-resizer:v1.3.0
longhornio/csi-snapshotter:v5.0.1
longhornio/csi-node-driver-registrar:v2.5.0
longhornio/livenessprobe:v2.8.0
longhornio/backing-image-manager:master-head
longhornio/longhorn-engine:master-head
longhornio/longhorn-instance-manager:master-head
longhornio/longhorn-manager:master-head
longhornio/longhorn-share-manager:master-head
longhornio/longhorn-ui:master-head
longhornio/support-bundle-kit:v0.0.16
longhornio/backing-image-manager:v1.4.4
longhornio/longhorn-engine:v1.4.4
longhornio/longhorn-instance-manager:v1.4.4
longhornio/longhorn-manager:v1.4.4
longhornio/longhorn-share-manager:v1.4.4
longhornio/longhorn-ui:v1.4.4
longhornio/support-bundle-kit:v0.0.27


@ -14,7 +14,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
---
# Source: longhorn/templates/serviceaccount.yaml
apiVersion: v1
@ -25,7 +25,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
---
# Source: longhorn/templates/default-setting.yaml
apiVersion: v1
@ -36,7 +36,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
data:
default-setting.yaml: |-
---
@ -49,7 +49,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
data:
storageclass.yaml: |
kind: StorageClass
@ -79,7 +79,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: backingimagedatasources.longhorn.io
spec:
@ -250,7 +250,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: backingimagemanagers.longhorn.io
spec:
@ -435,7 +435,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: backingimages.longhorn.io
spec:
@ -610,7 +610,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: backups.longhorn.io
spec:
@ -803,7 +803,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: backuptargets.longhorn.io
spec:
@ -986,7 +986,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: backupvolumes.longhorn.io
spec:
@ -1150,7 +1150,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: engineimages.longhorn.io
spec:
@ -1342,7 +1342,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: engines.longhorn.io
spec:
@ -1691,7 +1691,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: instancemanagers.longhorn.io
spec:
@ -1776,7 +1776,7 @@ spec:
description: InstanceManagerSpec defines the desired state of the Longhorn instance manager
properties:
engineImage:
description: 'TODO: deprecate this field'
description: 'Deprecated: This field is useless.'
type: string
image:
type: string
@ -1861,7 +1861,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: nodes.longhorn.io
spec:
@ -2100,7 +2100,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: orphans.longhorn.io
spec:
@ -2269,7 +2269,7 @@ spec:
jsonPath: .spec.groups
name: Groups
type: string
- description: Should be one of "backup" or "snapshot"
- description: Should be one of "snapshot", "snapshot-force-create", "snapshot-cleanup", "snapshot-delete", "backup" or "backup-force-create"
jsonPath: .spec.task
name: Task
type: string
@ -2331,10 +2331,14 @@ spec:
description: The retain count of the snapshot/backup.
type: integer
task:
description: The recurring job type. Can be "snapshot" or "backup".
description: The recurring job task. Can be "snapshot", "snapshot-force-create", "snapshot-cleanup", "snapshot-delete", "backup" or "backup-force-create".
enum:
- snapshot
- snapshot-force-create
- snapshot-cleanup
- snapshot-delete
- backup
- backup-force-create
type: string
type: object
status:
@ -2366,7 +2370,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: replicas.longhorn.io
spec:
@ -2584,7 +2588,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: settings.longhorn.io
spec:
@ -2675,7 +2679,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: sharemanagers.longhorn.io
spec:
@ -2786,7 +2790,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: snapshots.longhorn.io
spec:
@ -2913,7 +2917,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: supportbundles.longhorn.io
spec:
@ -3039,7 +3043,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: systembackups.longhorn.io
spec:
@ -3162,7 +3166,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: systemrestores.longhorn.io
spec:
@ -3264,7 +3268,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
longhorn-manager: ""
name: volumes.longhorn.io
spec:
@ -3438,7 +3442,7 @@ spec:
recurringJobs:
description: Deprecated. Replaced by a separate resource named "RecurringJob"
items:
description: 'VolumeRecurringJobSpec is a deprecated struct. TODO: Should be removed when recurringJobs gets removed from the volume spec.'
description: 'Deprecated: This field is useless and has been replaced by the RecurringJob CRD'
properties:
concurrency:
type: integer
@ -3459,7 +3463,11 @@ spec:
task:
enum:
- snapshot
- snapshot-force-create
- snapshot-cleanup
- snapshot-delete
- backup
- backup-force-create
type: string
type: object
type: array
@ -3616,7 +3624,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
rules:
- apiGroups:
- apiextensions.k8s.io
@ -3681,7 +3689,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -3699,7 +3707,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@ -3716,7 +3724,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-manager
name: longhorn-backend
namespace: longhorn-system
@ -3737,7 +3745,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-ui
name: longhorn-frontend
namespace: longhorn-system
@ -3758,7 +3766,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-conversion-webhook
name: longhorn-conversion-webhook
namespace: longhorn-system
@ -3779,7 +3787,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-admission-webhook
name: longhorn-admission-webhook
namespace: longhorn-system
@ -3800,7 +3808,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-recovery-backend
name: longhorn-recovery-backend
namespace: longhorn-system
@ -3821,7 +3829,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
name: longhorn-engine-manager
namespace: longhorn-system
spec:
@ -3837,7 +3845,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
name: longhorn-replica-manager
namespace: longhorn-system
spec:
@ -3853,7 +3861,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-manager
name: longhorn-manager
namespace: longhorn-system
@ -3866,16 +3874,16 @@ spec:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-manager
spec:
initContainers:
- name: wait-longhorn-admission-webhook
image: longhornio/longhorn-manager:master-head
image: longhornio/longhorn-manager:v1.4.4
command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" -k https://longhorn-admission-webhook:9443/v1/healthz) != "200" ]; do echo waiting; sleep 2; done']
containers:
- name: longhorn-manager
image: longhornio/longhorn-manager:master-head
image: longhornio/longhorn-manager:v1.4.4
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
@ -3884,17 +3892,17 @@ spec:
- -d
- daemon
- --engine-image
- "longhornio/longhorn-engine:master-head"
- "longhornio/longhorn-engine:v1.4.4"
- --instance-manager-image
- "longhornio/longhorn-instance-manager:master-head"
- "longhornio/longhorn-instance-manager:v1.4.4"
- --share-manager-image
- "longhornio/longhorn-share-manager:master-head"
- "longhornio/longhorn-share-manager:v1.4.4"
- --backing-image-manager-image
- "longhornio/backing-image-manager:master-head"
- "longhornio/backing-image-manager:v1.4.4"
- --support-bundle-manager-image
- "longhornio/support-bundle-kit:v0.0.16"
- "longhornio/support-bundle-kit:v0.0.27"
- --manager-image
- "longhornio/longhorn-manager:master-head"
- "longhornio/longhorn-manager:v1.4.4"
- --service-account
- longhorn-service-account
ports:
@ -3954,7 +3962,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
spec:
replicas: 1
selector:
@ -3965,23 +3973,23 @@ spec:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-driver-deployer
spec:
initContainers:
- name: wait-longhorn-manager
image: longhornio/longhorn-manager:master-head
image: longhornio/longhorn-manager:v1.4.4
command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done']
containers:
- name: longhorn-driver-deployer
image: longhornio/longhorn-manager:master-head
image: longhornio/longhorn-manager:v1.4.4
imagePullPolicy: IfNotPresent
command:
- longhorn-manager
- -d
- deploy-driver
- --manager-image
- "longhornio/longhorn-manager:master-head"
- "longhornio/longhorn-manager:v1.4.4"
- --manager-url
- http://longhorn-backend:9500/v1
env:
@ -4020,7 +4028,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-recovery-backend
name: longhorn-recovery-backend
namespace: longhorn-system
@ -4034,7 +4042,7 @@ spec:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-recovery-backend
spec:
affinity:
@ -4051,7 +4059,7 @@ spec:
topologyKey: kubernetes.io/hostname
containers:
- name: longhorn-recovery-backend
image: longhornio/longhorn-manager:master-head
image: longhornio/longhorn-manager:v1.4.4
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 2000
@ -4086,12 +4094,12 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-ui
name: longhorn-ui
namespace: longhorn-system
spec:
replicas: 1
replicas: 2
selector:
matchLabels:
app: longhorn-ui
@ -4100,12 +4108,24 @@ spec:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-ui
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- longhorn-ui
topologyKey: kubernetes.io/hostname
containers:
- name: longhorn-ui
image: longhornio/longhorn-ui:master-head
image: longhornio/longhorn-ui:v1.4.4
imagePullPolicy: IfNotPresent
volumeMounts:
- name : nginx-cache
@ -4137,7 +4157,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-conversion-webhook
name: longhorn-conversion-webhook
namespace: longhorn-system
@ -4151,7 +4171,7 @@ spec:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-conversion-webhook
spec:
affinity:
@ -4168,7 +4188,7 @@ spec:
topologyKey: kubernetes.io/hostname
containers:
- name: longhorn-conversion-webhook
image: longhornio/longhorn-manager:master-head
image: longhornio/longhorn-manager:v1.4.4
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 2000
@ -4197,7 +4217,7 @@ metadata:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-admission-webhook
name: longhorn-admission-webhook
namespace: longhorn-system
@ -4211,7 +4231,7 @@ spec:
labels:
app.kubernetes.io/name: longhorn
app.kubernetes.io/instance: longhorn
app.kubernetes.io/version: v1.4.0-dev
app.kubernetes.io/version: v1.4.4
app: longhorn-admission-webhook
spec:
affinity:
@ -4228,14 +4248,14 @@ spec:
topologyKey: kubernetes.io/hostname
initContainers:
- name: wait-longhorn-conversion-webhook
image: longhornio/longhorn-manager:master-head
image: longhornio/longhorn-manager:v1.4.4
command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" -k https://longhorn-conversion-webhook:9443/v1/healthz) != "200" ]; do echo waiting; sleep 2; done']
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 2000
containers:
- name: longhorn-admission-webhook
image: longhornio/longhorn-manager:master-head
image: longhornio/longhorn-manager:v1.4.4
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 2000
@ -4260,3 +4280,6 @@ spec:
fieldRef:
fieldPath: spec.nodeName
serviceAccountName: longhorn-service-account
---
# Source: longhorn/templates/validate-psp-install.yaml
#


@ -26,11 +26,11 @@ spec:
- bash
- -c
- *cmd
image: alpine:3.12
image: alpine:3.17
securityContext:
privileged: true
containers:
- name: sleep
image: k8s.gcr.io/pause:3.1
image: registry.k8s.io/pause:3.1
updateStrategy:
type: RollingUpdate


@ -31,6 +31,6 @@ spec:
privileged: true
containers:
- name: sleep
image: k8s.gcr.io/pause:3.1
image: registry.k8s.io/pause:3.1
updateStrategy:
type: RollingUpdate


@ -106,14 +106,14 @@ The life cycle of a snapshot CR is as below:
1. **Create**
1. When a snapshot CR is created, Longhorn mutation webhook will:
1. Add a volume label `longhornvolume: <VOLUME-NAME>` to the snapshot CR. This allow us to efficiently find snapshots corresponding to a volume without having listing potientially thoundsands of snapshots.
1. Add a volume label `longhornvolume: <VOLUME-NAME>` to the snapshot CR. This allows us to efficiently find snapshots corresponding to a volume without having to list potentially thousands of snapshots.
1. Add `longhornFinalizerKey` to snapshot CR to prevent it from being removed before Longhorn has a chance to clean up the corresponding snapshot
1. Populate the value for `snapshot.OwnerReferences` to uniquely identify the volume of this snapshot. This field contains the volume UID to uniquely identify the volume in case the old volume was deleted and a new volume was created with the same name.
2. For user created snapshot CR, the field `Spec.CreateSnapshot` should be set to `true` indicating that Longhorn should provision a new snapshot for this CR.
1. Longhorn snapshot controller will pick up this CR, check to see if there already is a snapshot inside the `engine.Status.Snapshots`.
1. If there is there already a snapshot inside engine.Status.Snapshots, update the snapshot.Status with the snapshot info inside `engine.Status.Snapshots`
1. If there is already a snapshot inside engine.Status.Snapshots, update the snapshot.Status with the snapshot info inside `engine.Status.Snapshots`
2. If there isn't a snapshot inside `engine.Status.Snapshots` then:
1. making a call to engine process to check if there already a snapshot with the same name. This is to make sure we don't accidentally create 2 snapshots with the same name. This logic can be remove after [the issue](https://github.com/longhorn/longhorn/issues/3844) is resolved
1. making a call to engine process to check if there is already a snapshot with the same name. This is to make sure we don't accidentally create 2 snapshots with the same name. This logic can be removed after [the issue](https://github.com/longhorn/longhorn/issues/3844) is resolved
1. If the snapshot doesn't exist inside the engine process, make another call to create the snapshot
3. For snapshots that already exist inside `engine.Status.Snapshots` but don't have corresponding snapshot CRs (i.e., system-generated snapshots), the engine monitoring will generate snapshot CRs for them. A snapshot CR generated by engine monitoring will have `Spec.CreateSnapshot` set to `false`; the Longhorn snapshot controller will not create snapshots for those CRs and only syncs their status
2. **Update**


@ -51,7 +51,7 @@ https://github.com/longhorn/longhorn/issues/3546
- Introduce a new gRPC server in Instance Manager.
- Keep re-usable connections between Manager and Instance Managers.
- Keep reusable connections between Manager and Instance Managers.
- Allow Manager to fall back to engine binary call when communicating with old Instance Manager.


@ -68,7 +68,7 @@ While the node where the share-manager pod is running is down, the share-manager
│ │
HTTP API ┌─────────────┴──────────────┐
│ │ │
│ │ endpint 1 │ endpoint N
│ │ endpoint 1 │ endpoint N
┌──────────────────────┐ │ ┌─────────▼────────┐ ┌────────▼─────────┐
│ share-manager pod │ │ │ recovery-backend │ │ recovery-backend │
│ │ │ │ pod │ │ pod │


@ -19,7 +19,7 @@ spec:
image: ubuntu:xenial
tty: true
command: [ "/bin/sh" ]
args: [ "-c", "cp -r -v /mnt/old/* /mnt/new" ]
args: [ "-c", "cp -r -v /mnt/old/. /mnt/new" ]
volumeMounts:
- name: old-vol
mountPath: /mnt/old
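The switch from `/mnt/old/*` to `/mnt/old/.` matters because the shell glob skips dotfiles, so hidden files would be silently lost during the volume copy. A minimal sketch (hypothetical `/tmp` paths):

```shell
#!/usr/bin/env bash
# Compare `cp -r src/*` (glob: misses dotfiles) with `cp -r src/.` (copies everything).
mkdir -p /tmp/cpdemo/src /tmp/cpdemo/glob /tmp/cpdemo/dot
touch /tmp/cpdemo/src/.hidden /tmp/cpdemo/src/visible

cp -r /tmp/cpdemo/src/* /tmp/cpdemo/glob   # glob expands only to non-hidden entries
cp -r /tmp/cpdemo/src/. /tmp/cpdemo/dot    # '.' copies the directory contents, dotfiles included

ls -A /tmp/cpdemo/glob   # visible
ls -A /tmp/cpdemo/dot    # .hidden  visible
```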


@ -20,6 +20,3 @@ spec:
- podSelector:
matchLabels:
longhorn.io/component: backing-image-manager
- podSelector:
matchLabels:
longhorn.io/component: backing-image-data-source


@ -17,9 +17,6 @@ spec:
- podSelector:
matchLabels:
longhorn.io/component: instance-manager
- podSelector:
matchLabels:
longhorn.io/component: backing-image-manager
- podSelector:
matchLabels:
longhorn.io/component: backing-image-data-source


@ -32,7 +32,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
image: registry.k8s.io/nginx-slim:0.8
livenessProbe:
exec:
command:


@ -21,3 +21,4 @@ parameters:
# nodeSelector: "storage,fast"
# recurringJobSelector: '[{"name":"snap-group", "isGroup":true},
# {"name":"backup", "isGroup":false}]'
# nfsOptions: "soft,timeo=150,retrans=3"
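Uncommented, the new parameter slots into the StorageClass parameters block like this (the class name is illustrative; the option values are the sample ones above):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-nfs-custom          # illustrative name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  nfsOptions: "soft,timeo=150,retrans=3"   # mount options used for RWX (NFS) volumes
```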


@ -105,13 +105,21 @@ set_packages_and_check_cmd()
esac
}
detect_node_kernel_release()
{
local pod="$1"
KERNEL_RELEASE=$(kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'uname -r')
echo "$KERNEL_RELEASE"
}
detect_node_os()
{
local pod="$1"
OS=`kubectl exec -i $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'grep -E "^ID_LIKE=" /etc/os-release | cut -d= -f2'`
OS=$(kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'grep -E "^ID_LIKE=" /etc/os-release | cut -d= -f2')
if [[ -z "${OS}" ]]; then
OS=`kubectl exec -i $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'grep -E "^ID=" /etc/os-release | cut -d= -f2'`
OS=$(kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c 'grep -E "^ID=" /etc/os-release | cut -d= -f2')
fi
echo "$OS"
}
@ -229,7 +237,7 @@ check_package_installed() {
local all_found=true
for pod in ${pods}; do
OS=`detect_node_os $pod`
OS=$(detect_node_os $pod)
if [ x"$OS" == x"" ]; then
error "Unable to detect OS on node $node."
exit 2
@ -240,10 +248,10 @@ check_package_installed() {
for ((i=0; i<${#PACKAGES[@]}; i++)); do
local package=${PACKAGES[$i]}
kubectl exec -i $pod -- nsenter --mount=/proc/1/ns/mnt -- timeout 30 bash -c "$CHECK_CMD $package" > /dev/null 2>&1
kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- timeout 30 bash -c "$CHECK_CMD $package" > /dev/null 2>&1
if [ $? != 0 ]; then
all_found=false
node=`kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName`
node=$(kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName)
error "$package is not found in $node."
fi
done
@ -280,10 +288,10 @@ check_multipathd() {
local all_not_found=true
for pod in ${pods}; do
kubectl exec -t $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c "systemctl status --no-pager multipathd.service" > /dev/null 2>&1
kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c "systemctl status --no-pager multipathd.service" > /dev/null 2>&1
if [ $? = 0 ]; then
all_not_found=false
node=`kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName`
node=$(kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName)
warn "multipathd is running on $node."
fi
done
@ -293,16 +301,38 @@ check_multipathd() {
fi
}
verlte() {
printf '%s\n' "$1" "$2" | sort -C -V
}
verlt() {
! verlte "$2" "$1"
}
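`verlte` succeeds when `$1 <= $2` under natural version ordering (`sort -V`, with `-C` checking order without producing output), and `verlt` is the strict variant. A quick standalone check of the same helpers:

```shell
#!/usr/bin/env bash
# Same version-comparison helpers as in environment_check.sh, exercised directly.
verlte() {
	printf '%s\n' "$1" "$2" | sort -C -V
}
verlt() {
	! verlte "$2" "$1"
}

verlt "4.18.0" "5.8" && echo "4.18.0 predates the recommended 5.8"
verlt "5.15" "5.8" || echo "5.15 is new enough"   # 5.15 > 5.8 under -V, unlike plain string order
```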
check_kernel_release() {
local pods=$(kubectl get pods -o name -l app=longhorn-environment-check)
recommended_kernel_release="5.8"
for pod in ${pods}; do
local kernel=$(detect_node_kernel_release ${pod})
if verlt "$kernel" "$recommended_kernel_release" ; then
local node=$(kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName)
warn "Node $node has outdated kernel release: $kernel. Recommending kernel release >= $recommended_kernel_release"
fi
done
}
check_iscsid() {
local pods=$(kubectl get pods -o name -l app=longhorn-environment-check)
local all_found=true
for pod in ${pods}; do
kubectl exec -t $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c "systemctl status --no-pager iscsid.service" > /dev/null 2>&1
kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c "systemctl status --no-pager iscsid.service" > /dev/null 2>&1
if [ $? != 0 ]; then
all_found=false
node=`kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName`
node=$(kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName)
error "iscsid is not running on $node."
fi
done
@ -312,10 +342,49 @@ check_iscsid() {
fi
}
check_nfs_client_kernel_support() {
local pods=$(kubectl get pods -o name -l app=longhorn-environment-check)
local all_found=true
local nfs_client_kernel_configs=("CONFIG_NFS_V4_1" "CONFIG_NFS_V4_2")
for config in "${nfs_client_kernel_configs[@]}"; do
declare -A nodes=()
for pod in ${pods}; do
local kernel_release=$(detect_node_kernel_release $pod)
if [ x"$kernel_release" == x"" ]; then
error "Unable to detect kernel release on node $node."
exit 2
fi
node=$(kubectl get ${pod} --no-headers -o=custom-columns=:.spec.nodeName)
res=$(kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c "grep -E \"^# ${config} is not set\" /boot/config-${kernel_release}" > /dev/null 2>&1)
if [[ $? == 0 ]]; then
all_found=false
nodes["${node}"]="${node}"
else
res=$(kubectl exec $pod -- nsenter --mount=/proc/1/ns/mnt -- bash -c "grep -E \"^${config}=\" /boot/config-${kernel_release}" > /dev/null 2>&1)
if [[ $? != 0 ]]; then
all_found=false
warn "Unable to check kernel config ${config} on node ${node}"
fi
fi
done
if [ ${#nodes[@]} != 0 ]; then
warn ""${config}" kernel config is not enabled on nodes ${nodes[*]}."
fi
done
if [[ ${all_found} == false ]]; then
warn "NFS client kernel support, ${nfs_client_kernel_configs[*]}, is not enabled on Longhorn nodes. Please refer to https://longhorn.io/docs/1.4.0/deploy/install/#installing-nfsv4-client for more information."
fi
}
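The per-node check above boils down to grepping the kernel build config: `# CONFIG_X is not set` means the option is disabled, while `CONFIG_X=y` (or `=m`) means it is enabled. A sketch of that decision against a temp file standing in for `/boot/config-$(uname -r)`:

```shell
#!/usr/bin/env bash
# Decide whether a kernel config option is enabled, given a kernel config file.
config_enabled() {
	local config="$1" file="$2"
	grep -qE "^${config}=" "$file"   # matches CONFIG_X=y or CONFIG_X=m
}

cfg=$(mktemp)
printf '%s\n' 'CONFIG_NFS_V4_1=y' '# CONFIG_NFS_V4_2 is not set' > "$cfg"

config_enabled CONFIG_NFS_V4_1 "$cfg" && echo "NFSv4.1 client support built in"
config_enabled CONFIG_NFS_V4_2 "$cfg" || echo "NFSv4.2 client support missing"
rm -f "$cfg"
```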
######################################################
# Main logics
######################################################
DEPENDENCIES=("kubectl" "jq" "mktemp")
DEPENDENCIES=("kubectl" "jq" "mktemp" "sort" "printf")
check_local_dependencies "${DEPENDENCIES[@]}"
# Check the each host has a unique hostname (for RWX volume)
@ -328,9 +397,11 @@ trap cleanup EXIT
create_ds
wait_ds_ready
check_nfs_client_kernel_support
check_package_installed
check_iscsid
check_multipathd
check_mount_propagation
check_kernel_release
exit 0


@ -15,6 +15,11 @@ while [[ $# -gt 0 ]]; do
shift # past argument
shift # past value
;;
-p|--platform)
platform="$2"
shift # past argument
shift # past value
;;
-h|--help)
help="true"
shift
@ -28,8 +33,9 @@ while [[ $# -gt 0 ]]; do
done
usage () {
echo "USAGE: $0 [--image-list longhorn-images.txt] [--images longhorn-images.tar.gz]"
echo "USAGE: $0 [--image-list longhorn-images.txt] [--images longhorn-images.tar.gz] [--platform linux/amd64]"
echo " [-l|--images-list path] text file with list of images. 1 per line."
echo " [-p|--platform linux/arch] if using images-list path, pulls the image with the specified platform"
echo " [-i|--images path] tar.gz generated by docker save. If this flag is empty, the script does not export images to a tar.gz file"
echo " [-h|--help] Usage message"
}
@ -42,7 +48,11 @@ fi
set -e -x
for i in $(cat ${list}); do
if [ -n "$platform" ]; then
docker pull ${i} --platform $platform
else
docker pull ${i}
fi
done
if [[ $images ]]; then
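The new flag changes only the pull invocation. A sketch of that branch with a hypothetical `pull` stub standing in for `docker pull`, so the flow can be traced without a Docker daemon:

```shell
#!/usr/bin/env bash
# Stub standing in for `docker pull`; just echoes what would be pulled.
pull() { echo "pull $*"; }

platform="linux/arm64"   # set by the new -p|--platform option; empty means daemon default
list="longhornio/longhorn-manager:v1.4.4
longhornio/longhorn-engine:v1.4.4"

while read -r i; do
	if [ -n "$platform" ]; then
		pull "$i" --platform "$platform"
	else
		pull "$i"
	fi
done <<< "$list"
```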


@ -66,7 +66,7 @@ rules:
- apiGroups: ["longhorn.io"]
resources: ["volumes", "engines", "replicas", "settings", "engineimages", "nodes", "instancemanagers", "sharemanagers",
"backingimages", "backingimagemanagers", "backingimagedatasources", "backuptargets", "backupvolumes", "backups",
"recurringjobs", "orphans", "snapshots"]
"recurringjobs", "orphans", "snapshots", "supportbundles", "systembackups", "systemrestores"]
verbs: ["*"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
@ -106,7 +106,7 @@ spec:
spec:
containers:
- name: longhorn-uninstall
image: longhornio/longhorn-manager:master-head
image: longhornio/longhorn-manager:v1.4.4
imagePullPolicy: IfNotPresent
securityContext:
privileged: true