From af0be15b8934d145de60c08685e24af6b97b287d Mon Sep 17 00:00:00 2001 From: Shuo Wu Date: Thu, 18 Apr 2019 01:56:07 +0000 Subject: [PATCH 01/22] Update README.md for uninstall modification --- README.md | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index 82c2bfb..55f2236 100644 --- a/README.md +++ b/README.md @@ -193,24 +193,32 @@ See [here](./docs/troubleshooting.md) for the troubleshooting guide. 2. Create the uninstallation job to clean up CRDs from the system and wait for success: ``` kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/master/uninstall/uninstall.yaml - kubectl -n longhorn-system get job/longhorn-uninstall -w + kubectl get job/longhorn-uninstall -w ``` Example output: ``` $ kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/master/uninstall/uninstall.yaml +serviceaccount/longhorn-uninstall-service-account created +clusterrole.rbac.authorization.k8s.io/longhorn-uninstall-role created +clusterrolebinding.rbac.authorization.k8s.io/longhorn-uninstall-bind created job.batch/longhorn-uninstall created -$ kubectl -n longhorn-system get job/longhorn-uninstall -w -NAME DESIRED SUCCESSFUL AGE -longhorn-uninstall 1 0 3s -longhorn-uninstall 1 1 45s + +$ kubectl get job/longhorn-uninstall -w +NAME COMPLETIONS DURATION AGE +longhorn-uninstall 0/1 3s 3s +longhorn-uninstall 1/1 20s 20s ^C ``` 3. Remove remaining components: ``` kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml + kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn/master/uninstall/uninstall.yaml ``` + +Tip: If you ran `kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml` first and the command got stuck, +pressing `Ctrl-C` and then running `kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/master/uninstall/uninstall.yaml` can also help you remove Longhorn.
Finally, don't forget to clean up remaining components. ## License From 8ef57976c303d36742f1925f988d8ec86dd58b4c Mon Sep 17 00:00:00 2001 From: Shuo Wu Date: Fri, 19 Apr 2019 21:48:29 +0000 Subject: [PATCH 02/22] Added 'Setup recurring snapshot/backup' section in snapshot-backup.md --- docs/snapshot-backup.md | 36 ++++++++++++++++++++++++++++++++++++ 1 file changed, 36 insertions(+) diff --git a/docs/snapshot-backup.md b/docs/snapshot-backup.md index de6f958..07382b8 100644 --- a/docs/snapshot-backup.md +++ b/docs/snapshot-backup.md @@ -129,3 +129,39 @@ nfs://longhorn-test-nfs-svc.default:/opt/backupstore ``` You can find an example NFS backupstore for testing purposes [here](https://github.com/rancher/longhorn/blob/master/deploy/backupstores/nfs-backupstore.yaml). + + +## Setup recurring snapshot/backup + +Longhorn volumes support recurring jobs for automatic backup and snapshot creation. + +### Set up recurring jobs for StorageClass + +Users can set the field `recurringJobs` in a StorageClass. Any volume created using this StorageClass will have those recurring jobs automatically set up. + +Field `recurringJobs` should follow the JSON format, e.g. + +``` +kind: StorageClass +apiVersion: storage.k8s.io/v1 +metadata: + name: longhorn +provisioner: rancher.io/longhorn +parameters: + numberOfReplicas: "3" + staleReplicaTimeout: "30" + fromBackup: "" + recurringJobs: '[{"name":"snap", "task":"snapshot", "cron":"*/1 * * * *", "retain":1}, + {"name":"backup", "task":"backup", "cron":"*/2 * * * *", "retain":1}]' + +``` + +Explanation: + +1. `name`: Name of the job. Do not use duplicate names within one `recurringJobs` field, and the length of `name` should be no more than 8 characters. + +2. `task`: Type of the job. Only `snapshot` (periodically create a snapshot) and `backup` (periodically create a snapshot, then back it up) are supported. + +3. `cron`: Cron expression. It specifies when the job will be executed. + +4. `retain`: How many snapshots/backups Longhorn will retain for the job.
It should be no less than 1. \ No newline at end of file From bfd0dec3021879bac9163f0114b44a1154b5ca86 Mon Sep 17 00:00:00 2001 From: Sheng Yang Date: Fri, 19 Apr 2019 15:25:15 -0700 Subject: [PATCH 03/22] Update snapshot-backup.md --- docs/snapshot-backup.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/snapshot-backup.md b/docs/snapshot-backup.md index 07382b8..de244b7 100644 --- a/docs/snapshot-backup.md +++ b/docs/snapshot-backup.md @@ -135,6 +135,8 @@ You can find an example NFS backupstore for testing purposes [here](https://githu Longhorn volumes support recurring jobs for automatic backup and snapshot creation. +Users can set up recurring snapshots/backups via the Longhorn UI or a Kubernetes StorageClass. + ### Set up recurring jobs for StorageClass Users can set the field `recurringJobs` in a StorageClass. Any volume created using this StorageClass will have those recurring jobs automatically set up. Field `recurringJobs` should follow the JSON format, e.g. @@ -164,4 +166,4 @@ Explanation: 3. `cron`: Cron expression. It specifies when the job will be executed. -4. `retain`: How many snapshots/backups Longhorn will retain for the job. It should be no less than 1. \ No newline at end of file +4. `retain`: How many snapshots/backups Longhorn will retain for the job. It should be no less than 1. From e63af539f45ac73ab3f37556e19e59dfa252163d Mon Sep 17 00:00:00 2001 From: Sheng Yang Date: Fri, 19 Apr 2019 15:31:16 -0700 Subject: [PATCH 04/22] Update snapshot-backup.md --- docs/snapshot-backup.md | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/docs/snapshot-backup.md b/docs/snapshot-backup.md index de244b7..1f3b63a 100644 --- a/docs/snapshot-backup.md +++ b/docs/snapshot-backup.md @@ -23,10 +23,7 @@ A backup target represents a backupstore in Longhorn. The backup target can be s See [here](#set-backuptarget) for details on how to set up the backup target. -### Recurring snapshot and backup -Longhorn supports recurring snapshots and backups for volumes.
Users only need to set when they wish to take the snapshot and/or backup, and how many snapshots/backups need to be retained; Longhorn will then automatically create the snapshot/backup at that time, as long as the volume is attached to a node. - -Users can find the settings for recurring snapshots and backups on the `Volume Detail` page. +Longhorn also supports setting up recurring snapshot/backup jobs for volumes, via the Longhorn UI or a Kubernetes StorageClass. See [here](#setup-recurring-snapshotbackup) for details. ## Set BackupTarget @@ -131,15 +128,19 @@ nfs://longhorn-test-nfs-svc.default:/opt/backupstore You can find an example NFS backupstore for testing purposes [here](https://github.com/rancher/longhorn/blob/master/deploy/backupstores/nfs-backupstore.yaml). -## Setup recurring snapshot/backup +# Setup recurring snapshot/backup -Longhorn volumes support recurring jobs for automatic backup and snapshot creation. +Longhorn supports recurring snapshots and backups for volumes. Users only need to set when they wish to take the snapshot and/or backup, and how many snapshots/backups need to be retained; Longhorn will then automatically create the snapshot/backup at that time, as long as the volume is attached to a node. Users can set up recurring snapshots/backups via the Longhorn UI or a Kubernetes StorageClass. ## Set up recurring jobs using Longhorn UI -Users can find the settings for recurring snapshots and backups on the `Volume Detail` page. ## Set up recurring jobs using StorageClass -Users can set the field `recurringJobs` in a StorageClass. Any volume created using this StorageClass will have those recurring jobs automatically set up. +Users can set the field `recurringJobs` in a StorageClass as a parameter. Any future volumes created using this StorageClass will have those recurring jobs automatically set up. Field `recurringJobs` should follow the JSON format, e.g.
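The `recurringJobs` constraints described in the patches above (a unique `name` of at most 8 characters, a `task` of `snapshot` or `backup`, a cron expression, and a `retain` count of at least 1) can be sanity-checked before a StorageClass is applied. A minimal sketch in Python — `validate_recurring_jobs` is a hypothetical helper for illustration, not part of Longhorn:

```python
import json

VALID_TASKS = {"snapshot", "backup"}

def validate_recurring_jobs(raw):
    """Validate a `recurringJobs` StorageClass parameter value.

    Returns the parsed job list, or raises ValueError describing the problem.
    """
    jobs = json.loads(raw)
    names = set()
    for job in jobs:
        name = job.get("name", "")
        # Names must be non-empty, unique, and at most 8 characters long.
        if not name or len(name) > 8:
            raise ValueError("`name` must be 1-8 characters: %r" % name)
        if name in names:
            raise ValueError("duplicate job name: %r" % name)
        names.add(name)
        # Only `snapshot` and `backup` tasks are supported.
        if job.get("task") not in VALID_TASKS:
            raise ValueError("`task` must be 'snapshot' or 'backup'")
        # A standard cron expression has five whitespace-separated fields.
        if len(job.get("cron", "").split()) != 5:
            raise ValueError("`cron` must be a 5-field cron expression")
        if job.get("retain", 0) < 1:
            raise ValueError("`retain` must be no less than 1")
    return jobs

# The example value from the StorageClass in patch 02:
raw = ('[{"name":"snap", "task":"snapshot", "cron":"*/1 * * * *", "retain":1},'
       ' {"name":"backup", "task":"backup", "cron":"*/2 * * * *", "retain":1}]')
print(len(validate_recurring_jobs(raw)))  # prints 2
```

This only checks the rules stated in the docs; Longhorn itself performs its own validation when the volume is created.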
From cac53ad3ba980a6136c6e919a6ba83dd7d17952d Mon Sep 17 00:00:00 2001 From: Shuo Wu Date: Wed, 8 May 2019 10:21:58 -0700 Subject: [PATCH 05/22] chart: added longhorn v0.4.1 chart --- chart/.helmignore | 21 +++++ chart/Chart.yaml | 24 ++++++ chart/README.md | 57 +++++++++++++ chart/app-readme.md | 7 ++ chart/questions.yml | 82 ++++++++++++++++++ chart/templates/NOTES.txt | 2 + chart/templates/_helpers.tpl | 22 +++++ chart/templates/clusterrole.yaml | 32 +++++++ chart/templates/clusterrolebinding.yaml | 12 +++ chart/templates/crds.yaml | 107 ++++++++++++++++++++++++ chart/templates/daemonset-sa.yaml | 94 +++++++++++++++++++++ chart/templates/deployment-driver.yaml | 61 ++++++++++++++ chart/templates/deployment-ui.yaml | 52 ++++++++++++ chart/templates/ingress.yaml | 30 +++++++ chart/templates/postupgrade-job.yaml | 31 +++++++ chart/templates/serviceaccount.yaml | 5 ++ chart/templates/storageclass.yaml | 17 ++++ chart/templates/tls-secrets.yaml | 15 ++++ chart/templates/uninstall-job.yaml | 30 +++++++ chart/values.yaml | 87 +++++++++++++++++++ 20 files changed, 788 insertions(+) create mode 100644 chart/.helmignore create mode 100644 chart/Chart.yaml create mode 100644 chart/README.md create mode 100644 chart/app-readme.md create mode 100644 chart/questions.yml create mode 100644 chart/templates/NOTES.txt create mode 100644 chart/templates/_helpers.tpl create mode 100644 chart/templates/clusterrole.yaml create mode 100644 chart/templates/clusterrolebinding.yaml create mode 100644 chart/templates/crds.yaml create mode 100644 chart/templates/daemonset-sa.yaml create mode 100644 chart/templates/deployment-driver.yaml create mode 100644 chart/templates/deployment-ui.yaml create mode 100644 chart/templates/ingress.yaml create mode 100644 chart/templates/postupgrade-job.yaml create mode 100644 chart/templates/serviceaccount.yaml create mode 100644 chart/templates/storageclass.yaml create mode 100644 chart/templates/tls-secrets.yaml create mode 100644 
chart/templates/uninstall-job.yaml create mode 100644 chart/values.yaml diff --git a/chart/.helmignore b/chart/.helmignore new file mode 100644 index 0000000..f0c1319 --- /dev/null +++ b/chart/.helmignore @@ -0,0 +1,21 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj diff --git a/chart/Chart.yaml b/chart/Chart.yaml new file mode 100644 index 0000000..8871258 --- /dev/null +++ b/chart/Chart.yaml @@ -0,0 +1,24 @@ +apiVersion: v1 +name: longhorn +version: 0.4.1 +appVersion: v0.4.1 +kubeVersion: ">=v1.8.0-r0" +description: Longhorn is a distributed block storage system for Kubernetes powered by Rancher Labs. +keywords: +- longhorn +- storage +- distributed +- block +- device +- iscsi +home: https://github.com/rancher/longhorn +sources: +- https://github.com/rancher/longhorn +- https://github.com/rancher/longhorn-engine +- https://github.com/rancher/longhorn-manager +- https://github.com/rancher/longhorn-ui +- https://github.com/rancher/longhorn-tests +maintainers: +- name: rancher + email: charts@rancher.com +icon: https://s3.us-east-2.amazonaws.com/longhorn-assets/longhorn-logo.svg diff --git a/chart/README.md b/chart/README.md new file mode 100644 index 0000000..26af79b --- /dev/null +++ b/chart/README.md @@ -0,0 +1,57 @@ +# Rancher Longhorn Chart + +The following document pertains to running Longhorn from the Rancher 2.0 chart. + +## Source Code + +Longhorn is 100% open source software. Project source code is spread across a number of repos: + +1. Longhorn Engine -- Core controller/replica logic https://github.com/rancher/longhorn-engine +2. 
Longhorn Manager -- Longhorn orchestration, includes Flexvolume driver for Kubernetes https://github.com/rancher/longhorn-manager +3. Longhorn UI -- Dashboard https://github.com/rancher/longhorn-ui + +## Prerequisites + +1. Rancher v2.1+ +2. Docker v1.13+ +3. Kubernetes v1.8+ cluster with 1 or more nodes and the Mount Propagation feature enabled. If your Kubernetes cluster was provisioned by Rancher v2.0.7 or later, the MountPropagation feature is enabled by default. [Check your Kubernetes environment now](https://github.com/rancher/longhorn#environment-check-script). If MountPropagation is disabled, the Kubernetes Flexvolume driver will be deployed instead of the default CSI driver. The Base Image feature will also be disabled if MountPropagation is disabled. +4. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster. +5. Make sure `open-iscsi` has been installed on all nodes of the Kubernetes cluster. For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`. + +## Uninstallation + +1. To prevent damage to the Kubernetes cluster, we recommend deleting all Kubernetes workloads using Longhorn volumes (PersistentVolume, PersistentVolumeClaim, StorageClass, Deployment, StatefulSet, DaemonSet, etc). + +2. From the Rancher UI, navigate to the `Catalog Apps` tab and delete the Longhorn app. + +## Troubleshooting + +### I deleted the Longhorn App from Rancher UI instead of following the uninstallation procedure + +Redeploy the (same version) Longhorn App. Follow the uninstallation procedure above. + +### Problems with CRDs + +If your CRD instances or the CRDs themselves can't be deleted for whatever reason, run the commands below to clean up. Caution: this will wipe all Longhorn state!
+ +``` +# Delete CRD finalizers, instances and definitions (set NAMESPACE to the namespace Longhorn is installed in first, e.g. longhorn-system) +for crd in $(kubectl get crd -o jsonpath={.items[*].metadata.name} | tr ' ' '\n' | grep longhorn.rancher.io); do + kubectl -n ${NAMESPACE} get $crd -o yaml | sed "s/\- longhorn.rancher.io//g" | kubectl apply -f - + kubectl -n ${NAMESPACE} delete $crd --all + kubectl delete crd/$crd +done +``` + +### Volume can be attached/detached from UI, but Kubernetes Pod/StatefulSet etc cannot use it + +Check whether the volume plugin directory has been set correctly. It is detected automatically unless the user has explicitly set it. + +By default, Kubernetes uses `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites). + +Some vendors choose to change the directory for various reasons. For example, GKE uses `/home/kubernetes/flexvolume` instead. + +Users can find the correct directory by running `ps aux|grep kubelet` on the host and checking the `--volume-plugin-dir` parameter. If there is none, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used. + +--- +Please see [link](https://github.com/rancher/longhorn) for more information. diff --git a/chart/app-readme.md b/chart/app-readme.md new file mode 100644 index 0000000..9094764 --- /dev/null +++ b/chart/app-readme.md @@ -0,0 +1,7 @@ +# Longhorn + +Longhorn is a lightweight, reliable and easy-to-use distributed block storage system for Kubernetes. Once deployed, users can leverage persistent volumes provided by Longhorn. + +Longhorn creates a dedicated storage controller for each volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes. Longhorn supports snapshots, backups and even allows you to schedule recurring snapshots and backups!
+ +[Chart Documentation](https://github.com/rancher/longhorn/blob/master/docs/chart.md) diff --git a/chart/questions.yml b/chart/questions.yml new file mode 100644 index 0000000..4f97f1c --- /dev/null +++ b/chart/questions.yml @@ -0,0 +1,82 @@ +categories: +- storage +labels: + io.rancher.certified: experimental +namespace: longhorn-system +questions: +- variable: driver + default: csi + description: "Deploy either the CSI or FlexVolume driver. CSI is newer but requires MountPropagation, a feature enabled by default in Kubernetes v1.10 and later" + type: enum + options: + - csi + - flexvolume + label: Longhorn Kubernetes Driver + group: "Longhorn Settings" + show_subquestion_if: flexvolume + subquestions: + - variable: persistence.flexvolumePath + default: "" + description: "Leave blank to autodetect. For RKE, use `/var/lib/kubelet/volumeplugins`. For GKE, use `/home/kubernetes/flexvolume/` instead. Users can find the correct directory by running `ps aux|grep kubelet` on the host and checking the --volume-plugin-dir parameter. If there is none, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used." + type: string + label: Longhorn Flexvolume Path + show_subquestion_if: csi + subquestions: + - variable: csi.attacherImage + default: + description: "Specify CSI attacher image. Leave blank to autodetect." + type: string + label: Longhorn CSI Attacher Image + - variable: csi.provisionerImage + default: + description: "Specify CSI provisioner image. Leave blank to autodetect." + type: string + label: Longhorn CSI Provisioner Image + - variable: csi.driverRegistrarImage + default: + description: "Specify CSI Driver Registrar image. Leave blank to autodetect."
+ type: string + label: Longhorn CSI Driver Registrar Image +- variable: persistence.defaultClass + default: "true" + description: "Set as default StorageClass" + group: "Longhorn Settings" + type: boolean + required: true + label: Default Storage Class +- variable: ingress.enabled + default: "false" + description: "Expose app using Layer 7 Load Balancer - ingress" + type: boolean + group: "Services and Load Balancing" + label: Expose app using Layer 7 Load Balancer + show_subquestion_if: true + subquestions: + - variable: ingress.host + default: "xip.io" + description: "Layer 7 Load Balancer hostname" + type: hostname + required: true + label: Layer 7 Load Balancer Hostname +- variable: service.ui.type + default: "Rancher-Proxy" + description: "Define Longhorn UI service type" + type: enum + options: + - "ClusterIP" + - "NodePort" + - "LoadBalancer" + - "Rancher-Proxy" + label: Longhorn UI Service + show_if: "ingress.enabled=false" + group: "Services and Load Balancing" + show_subquestion_if: "NodePort" + subquestions: + - variable: service.ui.nodePort + default: "" + description: "NodePort port number (to set explicitly, choose a port between 30000 and 32767)" + type: int + min: 30000 + max: 32767 + show_if: "service.ui.type=NodePort||service.ui.type=LoadBalancer" + label: UI Service NodePort number diff --git a/chart/templates/NOTES.txt b/chart/templates/NOTES.txt new file mode 100644 index 0000000..89af514 --- /dev/null +++ b/chart/templates/NOTES.txt @@ -0,0 +1,2 @@ +1. Get the application URL by running these commands: + kubectl get po -n $release_namespace diff --git a/chart/templates/_helpers.tpl b/chart/templates/_helpers.tpl new file mode 100644 index 0000000..88d0f45 --- /dev/null +++ b/chart/templates/_helpers.tpl @@ -0,0 +1,22 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart.
+*/}} +{{- define "longhorn.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "longhorn.fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} + + +{{- define "longhorn.managerIP" -}} +{{- $fullname := (include "longhorn.fullname" .) -}} +{{- printf "http://%s-backend:9500" $fullname | trunc 63 | trimSuffix "-" -}} +{{- end -}} diff --git a/chart/templates/clusterrole.yaml b/chart/templates/clusterrole.yaml new file mode 100644 index 0000000..e8244c7 --- /dev/null +++ b/chart/templates/clusterrole.yaml @@ -0,0 +1,32 @@ +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRole +metadata: + name: longhorn-role +rules: +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - "*" +- apiGroups: [""] + resources: ["pods", "events", "persistentvolumes", "persistentvolumeclaims", "nodes", "proxy/nodes", "pods/log", "secrets", "services", "endpoints", "configmaps"] + verbs: ["*"] +- apiGroups: [""] + resources: ["namespaces"] + verbs: ["get", "list"] +- apiGroups: ["apps"] + resources: ["daemonsets", "statefulsets", "deployments"] + verbs: ["*"] +- apiGroups: ["batch"] + resources: ["jobs", "cronjobs"] + verbs: ["*"] +- apiGroups: ["storage.k8s.io"] + resources: ["storageclasses", "volumeattachments"] + verbs: ["*"] +- apiGroups: ["csi.storage.k8s.io"] + resources: ["csinodeinfos"] + verbs: ["get", "list", "watch"] +- apiGroups: ["longhorn.rancher.io"] + resources: ["volumes", "engines", "replicas", "settings", "engineimages", "nodes"] + verbs: ["*"] diff --git a/chart/templates/clusterrolebinding.yaml b/chart/templates/clusterrolebinding.yaml new file mode 100644 index 0000000..12bcd53 --- /dev/null +++ 
b/chart/templates/clusterrolebinding.yaml @@ -0,0 +1,12 @@ +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRoleBinding +metadata: + name: longhorn-bind +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: longhorn-role +subjects: +- kind: ServiceAccount + name: longhorn-service-account + namespace: {{ .Release.Namespace }} diff --git a/chart/templates/crds.yaml b/chart/templates/crds.yaml new file mode 100644 index 0000000..ef45211 --- /dev/null +++ b/chart/templates/crds.yaml @@ -0,0 +1,107 @@ +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition +metadata: + labels: + longhorn-manager: Engine + name: engines.longhorn.rancher.io +spec: + group: longhorn.rancher.io + names: + kind: Engine + listKind: EngineList + plural: engines + shortNames: + - lhe + singular: engine + scope: Namespaced + version: v1alpha1 +--- +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition +metadata: + labels: + longhorn-manager: Replica + name: replicas.longhorn.rancher.io +spec: + group: longhorn.rancher.io + names: + kind: Replica + listKind: ReplicaList + plural: replicas + shortNames: + - lhr + singular: replica + scope: Namespaced + version: v1alpha1 +--- +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition +metadata: + labels: + longhorn-manager: Setting + name: settings.longhorn.rancher.io +spec: + group: longhorn.rancher.io + names: + kind: Setting + listKind: SettingList + plural: settings + shortNames: + - lhs + singular: setting + scope: Namespaced + version: v1alpha1 +--- +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition +metadata: + labels: + longhorn-manager: Volume + name: volumes.longhorn.rancher.io +spec: + group: longhorn.rancher.io + names: + kind: Volume + listKind: VolumeList + plural: volumes + shortNames: + - lhv + singular: volume + scope: Namespaced + version: v1alpha1 +--- +apiVersion: apiextensions.k8s.io/v1beta1 +kind: 
CustomResourceDefinition +metadata: + labels: + longhorn-manager: EngineImage + name: engineimages.longhorn.rancher.io +spec: + group: longhorn.rancher.io + names: + kind: EngineImage + listKind: EngineImageList + plural: engineimages + shortNames: + - lhei + singular: engineimage + scope: Namespaced + version: v1alpha1 +--- +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition +metadata: + labels: + longhorn-manager: Node + name: nodes.longhorn.rancher.io +spec: + group: longhorn.rancher.io + names: + kind: Node + listKind: NodeList + plural: nodes + shortNames: + - lhn + singular: node + scope: Namespaced + version: v1alpha1 diff --git a/chart/templates/daemonset-sa.yaml b/chart/templates/daemonset-sa.yaml new file mode 100644 index 0000000..f2c07c8 --- /dev/null +++ b/chart/templates/daemonset-sa.yaml @@ -0,0 +1,94 @@ +apiVersion: apps/v1beta2 +kind: DaemonSet +metadata: + labels: + app: longhorn-manager + name: longhorn-manager + namespace: {{ .Release.Namespace }} +spec: + selector: + matchLabels: + app: longhorn-manager + template: + metadata: + labels: + app: longhorn-manager + spec: + containers: + - name: longhorn-manager + image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}" + imagePullPolicy: Always + securityContext: + privileged: true + command: + - longhorn-manager + - -d + - daemon + - --engine-image + - "{{ .Values.image.longhorn.engine }}:{{ .Values.image.longhorn.engineTag }}" + - --manager-image + - "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}" + - --service-account + - longhorn-service-account + ports: + - containerPort: 9500 + name: manager + volumeMounts: + - name: dev + mountPath: /host/dev/ + - name: proc + mountPath: /host/proc/ + - name: varrun + mountPath: /var/run/ + - name: longhorn + mountPath: /var/lib/rancher/longhorn/ + mountPropagation: Bidirectional + env: + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: 
POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + - name: NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: LONGHORN_BACKEND_SVC + value: longhorn-backend + volumes: + - name: dev + hostPath: + path: /dev/ + - name: proc + hostPath: + path: /proc/ + - name: varrun + hostPath: + path: /var/run/ + - name: longhorn + hostPath: + path: /var/lib/rancher/longhorn/ + serviceAccountName: longhorn-service-account +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app: longhorn-manager + name: longhorn-backend + namespace: {{ .Release.Namespace }} +spec: + type: {{ .Values.service.manager.type }} + sessionAffinity: ClientIP + selector: + app: longhorn-manager + ports: + - name: manager + port: 9500 + targetPort: manager + {{- if .Values.service.manager.nodePort }} + nodePort: {{ .Values.service.manager.nodePort }} + {{- end }} diff --git a/chart/templates/deployment-driver.yaml b/chart/templates/deployment-driver.yaml new file mode 100644 index 0000000..d8f7370 --- /dev/null +++ b/chart/templates/deployment-driver.yaml @@ -0,0 +1,61 @@ +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: longhorn-driver-deployer + namespace: {{ .Release.Namespace }} +spec: + replicas: 1 + selector: + matchLabels: + app: longhorn-driver-deployer + template: + metadata: + labels: + app: longhorn-driver-deployer + spec: + initContainers: + - name: wait-longhorn-manager + image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}" + command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done'] + containers: + - name: longhorn-driver-deployer + image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}" + imagePullPolicy: Always + command: + - longhorn-manager + - -d + - deploy-driver + - --manager-image + - "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}" + - --manager-url + 
- http://longhorn-backend:9500/v1 + - --driver + - "{{ .Values.driver }}" + env: + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: SERVICE_ACCOUNT + valueFrom: + fieldRef: + fieldPath: spec.serviceAccountName + - name: FLEXVOLUME_DIR + value: {{ .Values.persistence.flexvolumePath }} + {{- if .Values.csi.attacherImage }} + - name: CSI_ATTACHER_IMAGE + value: {{ .Values.csi.attacherImage }} + {{- end }} + {{- if .Values.csi.provisionerImage }} + - name: CSI_PROVISIONER_IMAGE + value: {{ .Values.csi.provisionerImage }} + {{- end }} + {{- if .Values.csi.driverRegistrarImage }} + - name: CSI_DRIVER_REGISTRAR_IMAGE + value: {{ .Values.csi.driverRegistrarImage }} + {{- end }} + serviceAccountName: longhorn-service-account diff --git a/chart/templates/deployment-ui.yaml b/chart/templates/deployment-ui.yaml new file mode 100644 index 0000000..b8e641e --- /dev/null +++ b/chart/templates/deployment-ui.yaml @@ -0,0 +1,52 @@ +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + labels: + app: longhorn-ui + name: longhorn-ui + namespace: {{ .Release.Namespace }} +spec: + replicas: 1 + selector: + matchLabels: + app: longhorn-ui + template: + metadata: + labels: + app: longhorn-ui + spec: + containers: + - name: longhorn-ui + image: "{{ .Values.image.longhorn.ui }}:{{ .Values.image.longhorn.uiTag }}" + ports: + - containerPort: 8000 + name: http + env: + - name: LONGHORN_MANAGER_IP + value: "http://longhorn-backend:9500" +--- +kind: Service +apiVersion: v1 +metadata: + labels: + app: longhorn-ui + {{- if eq .Values.service.ui.type "Rancher-Proxy" }} + kubernetes.io/cluster-service: "true" + {{- end }} + name: longhorn-frontend + namespace: {{ .Release.Namespace }} +spec: + {{- if eq .Values.service.ui.type "Rancher-Proxy" }} + type: ClusterIP + {{- else }} + type: {{ .Values.service.ui.type }} + {{- end }} + selector: + app: longhorn-ui + ports: + - name: http + 
port: 80 + targetPort: http + {{- if .Values.service.ui.nodePort }} + nodePort: {{ .Values.service.ui.nodePort }} + {{- end }} diff --git a/chart/templates/ingress.yaml b/chart/templates/ingress.yaml new file mode 100644 index 0000000..b940c98 --- /dev/null +++ b/chart/templates/ingress.yaml @@ -0,0 +1,30 @@ +{{- if .Values.ingress.enabled }} +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: longhorn-ingress + labels: + app: longhorn-ingress + annotations: + {{- if .Values.ingress.tls }} + ingress.kubernetes.io/secure-backends: "true" + {{- end }} + {{- range $key, $value := .Values.ingress.annotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} +spec: + rules: + - host: {{ .Values.ingress.host }} + http: + paths: + - path: {{ default "" .Values.ingress.path }} + backend: + serviceName: longhorn-frontend + servicePort: 80 +{{- if .Values.ingress.tls }} + tls: + - hosts: + - {{ .Values.ingress.host }} + secretName: {{ .Values.ingress.tlsSecret }} +{{- end }} +{{- end }} diff --git a/chart/templates/postupgrade-job.yaml b/chart/templates/postupgrade-job.yaml new file mode 100644 index 0000000..00e76e2 --- /dev/null +++ b/chart/templates/postupgrade-job.yaml @@ -0,0 +1,31 @@ +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + "helm.sh/hook": post-upgrade + "helm.sh/hook-delete-policy": hook-succeeded + name: longhorn-post-upgrade + namespace: {{ .Release.Namespace }} +spec: + activeDeadlineSeconds: 900 + backoffLimit: 1 + template: + metadata: + name: longhorn-post-upgrade + spec: + containers: + - name: longhorn-post-upgrade + image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}" + imagePullPolicy: Always + command: + - longhorn-manager + - post-upgrade + - --from-version + - 0.0.1 + env: + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + restartPolicy: OnFailure + serviceAccountName: longhorn-service-account diff --git a/chart/templates/serviceaccount.yaml 
b/chart/templates/serviceaccount.yaml new file mode 100644 index 0000000..0bbe9b0 --- /dev/null +++ b/chart/templates/serviceaccount.yaml @@ -0,0 +1,5 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: longhorn-service-account + namespace: {{ .Release.Namespace }} diff --git a/chart/templates/storageclass.yaml b/chart/templates/storageclass.yaml new file mode 100644 index 0000000..c19e6cf --- /dev/null +++ b/chart/templates/storageclass.yaml @@ -0,0 +1,17 @@ +kind: StorageClass +apiVersion: storage.k8s.io/v1 +metadata: + name: longhorn + {{- if .Values.persistence.defaultClass }} + annotations: + storageclass.beta.kubernetes.io/is-default-class: "true" + {{- else }} + annotations: + storageclass.beta.kubernetes.io/is-default-class: "false" + {{- end }} +provisioner: rancher.io/longhorn +parameters: + numberOfReplicas: "3" + staleReplicaTimeout: "30" + fromBackup: "" + baseImage: "" diff --git a/chart/templates/tls-secrets.yaml b/chart/templates/tls-secrets.yaml new file mode 100644 index 0000000..74d0645 --- /dev/null +++ b/chart/templates/tls-secrets.yaml @@ -0,0 +1,15 @@ +{{- if .Values.ingress.enabled }} +{{- range .Values.ingress.secrets }} +apiVersion: v1 +kind: Secret +metadata: + name: longhorn + labels: + app: longhorn +type: kubernetes.io/tls +data: + tls.crt: {{ .certificate | b64enc }} + tls.key: {{ .key | b64enc }} +--- +{{- end }} +{{- end }} diff --git a/chart/templates/uninstall-job.yaml b/chart/templates/uninstall-job.yaml new file mode 100644 index 0000000..1f62b7c --- /dev/null +++ b/chart/templates/uninstall-job.yaml @@ -0,0 +1,30 @@ +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + "helm.sh/hook": pre-delete + "helm.sh/hook-delete-policy": hook-succeeded + name: longhorn-uninstall + namespace: {{ .Release.Namespace }} +spec: + activeDeadlineSeconds: 900 + backoffLimit: 1 + template: + metadata: + name: longhorn-uninstall + spec: + containers: + - name: longhorn-uninstall + image: "{{ .Values.image.longhorn.manager }}:{{ 
.Values.image.longhorn.managerTag }}" + imagePullPolicy: Always + command: + - longhorn-manager + - uninstall + - --force + env: + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + restartPolicy: OnFailure + serviceAccountName: longhorn-service-account diff --git a/chart/values.yaml b/chart/values.yaml new file mode 100644 index 0000000..b4bcb77 --- /dev/null +++ b/chart/values.yaml @@ -0,0 +1,87 @@ +# Default values for longhorn. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. +image: + longhorn: + engine: rancher/longhorn-engine + engineTag: v0.4.1 + manager: rancher/longhorn-manager + managerTag: v0.4.1 + ui: rancher/longhorn-ui + uiTag: v0.4.1 + pullPolicy: IfNotPresent + +service: + ui: + type: LoadBalancer + nodePort: "" + manager: + type: ClusterIP + nodePort: "" + +# deploy either 'flexvolume' or 'csi' driver +driver: csi + +persistence: + # for GKE uses /home/kubernetes/flexvolume/ instead, User can find the correct directory by running ps aux|grep kubelet on the host and check the --volume-plugin-dir parameter. + # If there is none, the default /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ will be used. + flexvolumePath: + defaultClass: true + +csi: + attacherImage: + provisionerImage: + driverRegistrarImage: + +resources: {} + # We usually recommend not to specify default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
+ # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + # + +ingress: + ## Set to true to enable ingress record generation + enabled: false + + + host: xip.io + + ## Set this to true in order to enable TLS on the ingress record + ## A side effect of this will be that the backend service will be connected at port 443 + tls: false + + ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS + tlsSecret: longhorn.local-tls + + ## Ingress annotations done as key:value pairs + ## If you're using kube-lego, you will want to add: + ## kubernetes.io/tls-acme: true + ## + ## For a full list of possible ingress annotations, please see + ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md + ## + ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set + annotations: + # kubernetes.io/ingress.class: nginx + # kubernetes.io/tls-acme: true + + secrets: + ## If you're providing your own certificates, please use this to add the certificates as secrets + ## key and certificate should start with -----BEGIN CERTIFICATE----- or + ## -----BEGIN RSA PRIVATE KEY----- + ## + ## name should line up with a tlsSecret set further up + ## If you're using kube-lego, this is unneeded, as it will create the secret for you if it is not set + ## + ## It is also possible to create and manage the certificates outside of this helm chart + ## Please see README.md for more information + # - name: longhorn.local-tls + # key: + # certificate: From 4195f1c0f4953cf370af45f1bda6af7a4d5b3d60 Mon Sep 17 00:00:00 2001 From: Shuo Wu Date: Wed, 8 May 2019 10:25:09 -0700 Subject: [PATCH 06/22] chart: arg kubelet root-dir became configurable --- chart/questions.yml | 5 +++++ chart/templates/deployment-driver.yaml | 4 ++++ chart/values.yaml | 1 + 3 files changed, 10 insertions(+) diff --git a/chart/questions.yml b/chart/questions.yml index 4f97f1c..7ae1c6f 
100644 --- a/chart/questions.yml +++ b/chart/questions.yml @@ -37,6 +37,11 @@ questions: description: "Specify CSI Driver Registrar image. Leave blank to autodetect." type: string label: Longhorn CSI Driver Registrar Image + - variable: csi.kubeletRootDir + default: + description: "Specify kubelet root-dir. Leave blank to autodetect." + type: string + label: Kubelet Root Directory - variable: persistence.defaultClass default: "true" description: "Set as default StorageClass" diff --git a/chart/templates/deployment-driver.yaml b/chart/templates/deployment-driver.yaml index d8f7370..25eb381 100644 --- a/chart/templates/deployment-driver.yaml +++ b/chart/templates/deployment-driver.yaml @@ -46,6 +46,10 @@ spec: fieldPath: spec.serviceAccountName - name: FLEXVOLUME_DIR value: {{ .Values.persistence.flexvolumePath }} + {{- if .Values.csi.kubeletRootDir }} + - name: KUBELET_ROOT_DIR + value: {{ .Values.csi.kubeletRootDir }} + {{- end }} {{- if .Values.csi.attacherImage }} - name: CSI_ATTACHER_IMAGE value: {{ .Values.csi.attacherImage }} diff --git a/chart/values.yaml b/chart/values.yaml index b4bcb77..8c61b25 100644 --- a/chart/values.yaml +++ b/chart/values.yaml @@ -32,6 +32,7 @@ csi: attacherImage: provisionerImage: driverRegistrarImage: + kubeletRootDir: resources: {} # We usually recommend not to specify default resources and to leave this as a conscious From d5a34b20ad9f5bfb5f3145c6a613a8554f17a92e Mon Sep 17 00:00:00 2001 From: Shuo Wu Date: Wed, 8 May 2019 10:27:43 -0700 Subject: [PATCH 07/22] chart: csi attacher and provisioner replica count became configurable --- chart/questions.yml | 14 ++++++++++++++ chart/templates/deployment-driver.yaml | 8 ++++++++ chart/values.yaml | 2 ++ 3 files changed, 24 insertions(+) diff --git a/chart/questions.yml b/chart/questions.yml index 7ae1c6f..08b7379 100644 --- a/chart/questions.yml +++ b/chart/questions.yml @@ -42,6 +42,20 @@ questions: description: "Specify kubelet root-dir. Leave blank to autodetect." 
type: string label: Kubelet Root Directory + - variable: csi.attacherReplicaCount + type: int + default: + min: 1 + max: 10 + description: "Specify number of CSI Attacher replica. By default 3." + label: Longhorn CSI Attacher replica count + - variable: csi.provisionerReplicaCount + type: int + default: + min: 1 + max: 10 + description: "Specify number of CSI Provisioner replica. By default 3." + label: Longhorn CSI Provisioner replica count - variable: persistence.defaultClass default: "true" description: "Set as default StorageClass" diff --git a/chart/templates/deployment-driver.yaml b/chart/templates/deployment-driver.yaml index 25eb381..1d21924 100644 --- a/chart/templates/deployment-driver.yaml +++ b/chart/templates/deployment-driver.yaml @@ -62,4 +62,12 @@ spec: - name: CSI_DRIVER_REGISTRAR_IMAGE value: {{ .Values.csi.driverRegistrarImage }} {{- end }} + {{- if .Values.csi.attacherReplicaCount }} + - name: CSI_ATTACHER_REPLICA_COUNT + value: "{{ .Values.csi.attacherReplicaCount }}" + {{- end }} + {{- if .Values.csi.provisionerReplicaCount }} + - name: CSI_PROVISIONER_REPLICA_COUNT + value: "{{ .Values.csi.provisionerReplicaCount }}" + {{- end }} serviceAccountName: longhorn-service-account diff --git a/chart/values.yaml b/chart/values.yaml index 8c61b25..9661fb3 100644 --- a/chart/values.yaml +++ b/chart/values.yaml @@ -33,6 +33,8 @@ csi: provisionerImage: driverRegistrarImage: kubeletRootDir: + attacherReplicaCount: + provisionerReplicaCount: resources: {} # We usually recommend not to specify default resources and to leave this as a conscious From a13bbac74bbae07521032fdc616bcc0d39932ebc Mon Sep 17 00:00:00 2001 From: Shuo Wu Date: Wed, 8 May 2019 14:05:55 -0700 Subject: [PATCH 08/22] chart: longhorn default storageClass replica count became configurable --- chart/questions.yml | 8 ++++++++ chart/templates/storageclass.yaml | 2 +- chart/values.yaml | 1 + 3 files changed, 10 insertions(+), 1 deletion(-) diff --git a/chart/questions.yml b/chart/questions.yml 
index 08b7379..6f29fca 100644 --- a/chart/questions.yml +++ b/chart/questions.yml @@ -63,6 +63,14 @@ questions: type: boolean required: true label: Default Storage Class +- variable: persistence.defaultClassReplicaCount + description: "Set replica count for default StorageClass" + group: "Longhorn Settings" + type: int + default: 3 + min: 1 + max: 10 + label: Default Storage Class Replica Count - variable: ingress.enabled default: "false" description: "Expose app using Layer 7 Load Balancer - ingress" diff --git a/chart/templates/storageclass.yaml b/chart/templates/storageclass.yaml index c19e6cf..71253ad 100644 --- a/chart/templates/storageclass.yaml +++ b/chart/templates/storageclass.yaml @@ -11,7 +11,7 @@ metadata: {{- end }} provisioner: rancher.io/longhorn parameters: - numberOfReplicas: "3" + numberOfReplicas: "{{ .Values.persistence.defaultClassReplicaCount }}" staleReplicaTimeout: "30" fromBackup: "" baseImage: "" diff --git a/chart/values.yaml b/chart/values.yaml index 9661fb3..7e2e5cc 100644 --- a/chart/values.yaml +++ b/chart/values.yaml @@ -27,6 +27,7 @@ persistence: # If there is none, the default /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ will be used. 
flexvolumePath: defaultClass: true + defaultClassReplicaCount: 3 csi: attacherImage: From 2639a4e4d7e9918e53f23dcb349a2271b8384e7a Mon Sep 17 00:00:00 2001 From: Shuo Wu Date: Wed, 8 May 2019 10:30:11 -0700 Subject: [PATCH 09/22] chart: updated uninstaller yaml --- chart/templates/uninstall-job.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/chart/templates/uninstall-job.yaml b/chart/templates/uninstall-job.yaml index 1f62b7c..0adc2a4 100644 --- a/chart/templates/uninstall-job.yaml +++ b/chart/templates/uninstall-job.yaml @@ -22,7 +22,7 @@ spec: - uninstall - --force env: - - name: POD_NAMESPACE + - name: LONGHORN_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace From ad3d52d531b34806bafa18dcfb6aac5234967cda Mon Sep 17 00:00:00 2001 From: Shuo Wu Date: Wed, 8 May 2019 11:12:58 -0700 Subject: [PATCH 10/22] chart: add installation guide and related config yaml file for chart --- README.md | 32 +++++++++++++++++++++++++++++--- 1 file changed, 29 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 55f2236..d27a2af 100644 --- a/README.md +++ b/README.md @@ -56,14 +56,23 @@ If there is a new version of Longhorn available, you will see an `Upgrade Availa ## On any Kubernetes cluster +### Install Longhorn with kubectl You can install Longhorn on any Kubernetes cluster using following command: ``` kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml ``` - Google Kubernetes Engine (GKE) requires additional setup in order for Longhorn to function properly. If your are a GKE user, read [this page](docs/gke.md) before proceeding. +### Install Longhorn with Helm +First, you need to initialize Helm locally and [install Tiller into your Kubernetes cluster with RBAC](https://helm.sh/docs/using_helm/#role-based-access-control). 
+Then install longhorn: +``` +helm install https://raw.githubusercontent.com/rancher/longhorn/master/chart --name longhorn --namespace longhorn-system +``` + +--- + Longhorn will be installed in the namespace `longhorn-system` One of the two available drivers (CSI and Flexvolume) would be chosen automatically based on the version of Kubernetes you use. See [here](docs/driver.md) for details. @@ -108,8 +117,19 @@ Since v0.3.3, Longhorn is able to perform fully-automated non-disruptive upgrade If you're upgrading from Longhorn v0.3.0 or newer: -1. Follow [the same steps for installation](#install) to upgrade Longhorn manager -2. After upgraded manager, follow [the steps here](docs/upgrade.md#upgrade-longhorn-engine) to upgrade Longhorn engine for existing volumes. +## Upgrade Longhorn manager + +##### On Kubernetes clusters managed by Rancher 2.1 or newer +Follow [the same steps for installation](#install) to upgrade Longhorn manager + +##### Using kubectl +`kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml` + +##### Using Helm +`helm upgrade longhorn https://raw.githubusercontent.com/rancher/longhorn/master/chart` + +## Upgrade Longhorn engine +After upgrading the manager, follow [the steps here](docs/upgrade.md#upgrade-longhorn-engine) to upgrade Longhorn engine for existing volumes. 1. For non distruptive upgrade, follow [the live upgrade steps here](./docs/upgrade.md#live-upgrade) For more details about upgrade in Longhorn or upgrade from older versions, [see here](docs/upgrade.md). @@ -188,6 +208,7 @@ See [here](./docs/troubleshooting.md) for the troubleshooting guide. # Uninstall Longhorn +### Using kubectl 1. To prevent damaging the Kubernetes cluster, we recommend deleting all Kubernetes workloads using Longhorn volumes (PersistentVolume, PersistentVolumeClaim, StorageClass, Deployment, StatefulSet, DaemonSet, etc) first. 2.
Create the uninstallation job to clean up CRDs from the system and wait for success: @@ -220,6 +241,11 @@ longhorn-uninstall 1/1 20s 20s Tip: If you try `kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml` first and get stuck there, pressing `Ctrl C` then running `kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/master/uninstall/uninstall.yaml` can also help you remove Longhorn. Finally, don't forget to cleanup remaining components. +### Using Helm +``` +helm delete longhorn --purge +``` + ## License Copyright (c) 2014-2019 [Rancher Labs, Inc.](http://rancher.com/) From e0775905bf4ccff247cd40e468d0259ff2161d6e Mon Sep 17 00:00:00 2001 From: Shuo Wu Date: Wed, 8 May 2019 20:57:59 -0700 Subject: [PATCH 11/22] chart: updated README.md --- README.md | 18 ++++++++++++++---- 1 file changed, 14 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index d27a2af..8540e1f 100644 --- a/README.md +++ b/README.md @@ -66,9 +66,15 @@ Google Kubernetes Engine (GKE) requires additional setup in order for Longhorn t ### Install Longhorn with Helm First, you need to initialize Helm locally and [install Tiller into your Kubernetes cluster with RBAC](https://helm.sh/docs/using_helm/#role-based-access-control). 
-Then install longhorn: + +Then download the Longhorn repository: ``` -helm install https://raw.githubusercontent.com/rancher/longhorn/master/chart --name longhorn --namespace longhorn-system +git clone https://github.com/rancher/longhorn.git +``` + +Now use the following command to install Longhorn: +``` +helm install ./longhorn/chart --name longhorn --namespace longhorn-system ``` --- @@ -123,10 +129,14 @@ If you're upgrading from Longhorn v0.3.0 or newer: Follow [the same steps for installation](#install) to upgrade Longhorn manager ##### Using kubectl -`kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml` +``` +kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml +``` ##### Using Helm -`helm upgrade longhorn https://raw.githubusercontent.com/rancher/longhorn/master/chart` +``` +helm upgrade longhorn ./longhorn/chart +``` ## Upgrade Longhorn engine After upgrading the manager, follow [the steps here](docs/upgrade.md#upgrade-longhorn-engine) to upgrade Longhorn engine for existing volumes.
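The upgrade path this patch documents amounts to a short command sequence; a sketch, assuming `kubectl` and `helm` are pointed at the target cluster (the rollout check at the end is an illustrative addition, not part of the patch):

```
# Upgrade the manager with kubectl, then wait for the manager DaemonSet to roll out
kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml
kubectl -n longhorn-system rollout status daemonset/longhorn-manager
# Or, if the chart repository was cloned, upgrade via Helm instead
helm upgrade longhorn ./longhorn/chart
```

Either way, the engine upgrade for existing volumes is a separate, follow-up step as described above.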
From 0f6450e1a940bad90e25940396d84b8f88df3642 Mon Sep 17 00:00:00 2001 From: Sheng Yang Date: Tue, 14 May 2019 09:01:32 -0700 Subject: [PATCH 12/22] Sync with Manager commit 65d3426cae52a2da7d4af4ba6f32fd3868d9711d Author: Sheng Yang Date: Tue May 14 08:56:14 2019 -0700 Longhorn v0.5.0-rc1 release --- deploy/longhorn.yaml | 23 ++++++++++++----- examples/storageclass.yaml | 2 ++ uninstall/uninstall.yaml | 53 +++++++++++++++++++++++++++++++++----- 3 files changed, 64 insertions(+), 14 deletions(-) diff --git a/deploy/longhorn.yaml b/deploy/longhorn.yaml index 62c4884..83f6744 100644 --- a/deploy/longhorn.yaml +++ b/deploy/longhorn.yaml @@ -181,7 +181,7 @@ spec: spec: containers: - name: longhorn-manager - image: rancher/longhorn-manager:v0.4.1 + image: rancher/longhorn-manager:v0.5.0-rc1 imagePullPolicy: Always securityContext: privileged: true @@ -190,9 +190,9 @@ spec: - -d - daemon - --engine-image - - rancher/longhorn-engine:v0.4.1 + - rancher/longhorn-engine:v0.5.0-rc1 - --manager-image - - rancher/longhorn-manager:v0.4.1 + - rancher/longhorn-manager:v0.5.0-rc1 - --service-account - longhorn-service-account ports: @@ -269,7 +269,7 @@ spec: spec: containers: - name: longhorn-ui - image: rancher/longhorn-ui:v0.4.1 + image: rancher/longhorn-ui:v0.5.0-rc1 ports: - containerPort: 8000 env: @@ -308,26 +308,35 @@ spec: spec: initContainers: - name: wait-longhorn-manager - image: rancher/longhorn-manager:v0.4.1 + image: rancher/longhorn-manager:v0.5.0-rc1 command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done'] containers: - name: longhorn-driver-deployer - image: rancher/longhorn-manager:v0.4.1 + image: rancher/longhorn-manager:v0.5.0-rc1 imagePullPolicy: Always command: - longhorn-manager - -d - deploy-driver - --manager-image - - rancher/longhorn-manager:v0.4.1 + - rancher/longhorn-manager:v0.5.0-rc1 - --manager-url - http://longhorn-backend:9500/v1 # manually choose 
"flexvolume" or "csi" #- --driver #- flexvolume + # manually set root directory for flexvolume + #- --flexvolume-dir + #- /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ # manually set root directory for csi #- --kubelet-root-dir #- /var/lib/rancher/k3s/agent/kubelet + # manually specify number of CSI attacher replicas + #- --csi-attacher-replica-count + #- "3" + # manually specify number of CSI provisioner replicas + #- --csi-provisioner-replica-count + #- "3" env: - name: POD_NAMESPACE valueFrom: diff --git a/examples/storageclass.yaml b/examples/storageclass.yaml index a7e813d..3d5e5bb 100644 --- a/examples/storageclass.yaml +++ b/examples/storageclass.yaml @@ -7,3 +7,5 @@ parameters: numberOfReplicas: "3" staleReplicaTimeout: "30" fromBackup: "" +# recurringJobs: '[{"name":"snap", "task":"snapshot", "cron":"*/1 * * * *", "retain":1}, +# {"name":"backup", "task":"backup", "cron":"*/2 * * * *", "retain":1}]' \ No newline at end of file diff --git a/uninstall/uninstall.yaml b/uninstall/uninstall.yaml index 64c9209..ca12a17 100644 --- a/uninstall/uninstall.yaml +++ b/uninstall/uninstall.yaml @@ -1,8 +1,49 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: longhorn-uninstall-service-account +--- +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRole +metadata: + name: longhorn-uninstall-role +rules: + - apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - "*" + - apiGroups: [""] + resources: ["pods", "persistentvolumes", "persistentvolumeclaims"] + verbs: ["*"] + - apiGroups: ["apps"] + resources: ["daemonsets", "statefulsets", "deployments"] + verbs: ["*"] + - apiGroups: ["batch"] + resources: ["jobs", "cronjobs"] + verbs: ["*"] + - apiGroups: ["longhorn.rancher.io"] + resources: ["volumes", "engines", "replicas", "settings", "engineimages", "nodes"] + verbs: ["*"] +--- +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: ClusterRoleBinding +metadata: + name: longhorn-uninstall-bind +roleRef: + 
apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: longhorn-uninstall-role +subjects: + - kind: ServiceAccount + name: longhorn-uninstall-service-account + namespace: default +--- apiVersion: batch/v1 kind: Job metadata: name: longhorn-uninstall - namespace: longhorn-system spec: activeDeadlineSeconds: 900 backoffLimit: 1 @@ -12,16 +53,14 @@ spec: spec: containers: - name: longhorn-uninstall - image: rancher/longhorn-manager:v0.4.1 + image: rancher/longhorn-manager:v0.5.0-rc1 imagePullPolicy: Always command: - longhorn-manager - uninstall - --force env: - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace + - name: LONGHORN_NAMESPACE + value: longhorn-system restartPolicy: OnFailure - serviceAccountName: longhorn-service-account + serviceAccountName: longhorn-uninstall-service-account From 90f4ba0aee4ebbed286e80a3381cd64bf3030121 Mon Sep 17 00:00:00 2001 From: Sheng Yang Date: Tue, 14 May 2019 09:17:02 -0700 Subject: [PATCH 13/22] Update Helm chart to v0.5.0-rc1 --- chart/Chart.yaml | 4 ++-- chart/values.yaml | 6 +++--- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/chart/Chart.yaml b/chart/Chart.yaml index 8871258..c64384f 100644 --- a/chart/Chart.yaml +++ b/chart/Chart.yaml @@ -1,7 +1,7 @@ apiVersion: v1 name: longhorn -version: 0.4.1 -appVersion: v0.4.1 +version: 0.5.0-rc1 +appVersion: v0.5.0-rc1 kubeVersion: ">=v1.8.0-r0" description: Longhorn is a distributed block storage system for Kubernetes powered by Rancher Labs. 
keywords: diff --git a/chart/values.yaml b/chart/values.yaml index 7e2e5cc..a10e65a 100644 --- a/chart/values.yaml +++ b/chart/values.yaml @@ -4,11 +4,11 @@ image: longhorn: engine: rancher/longhorn-engine - engineTag: v0.4.1 + engineTag: v0.5.0-rc1 manager: rancher/longhorn-manager - managerTag: v0.4.1 + managerTag: v0.5.0-rc1 ui: rancher/longhorn-ui - uiTag: v0.4.1 + uiTag: v0.5.0-rc1 pullPolicy: IfNotPresent service: From 209dc665e39cb1d493e6581178c50b7b1266b612 Mon Sep 17 00:00:00 2001 From: Sheng Yang Date: Thu, 16 May 2019 12:09:59 -0700 Subject: [PATCH 14/22] Sync with manager commit 0b81510dea8df739ae1b6c5c62531615e052560e Author: Sheng Yang Date: Thu May 16 12:05:45 2019 -0700 Longhorn v0.5.0-rc2 release --- deploy/longhorn.yaml | 14 +++++++------- uninstall/uninstall.yaml | 2 +- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/deploy/longhorn.yaml b/deploy/longhorn.yaml index 83f6744..17107e4 100644 --- a/deploy/longhorn.yaml +++ b/deploy/longhorn.yaml @@ -181,7 +181,7 @@ spec: spec: containers: - name: longhorn-manager - image: rancher/longhorn-manager:v0.5.0-rc1 + image: rancher/longhorn-manager:v0.5.0-rc2 imagePullPolicy: Always securityContext: privileged: true @@ -190,9 +190,9 @@ spec: - -d - daemon - --engine-image - - rancher/longhorn-engine:v0.5.0-rc1 + - rancher/longhorn-engine:v0.5.0-rc2 - --manager-image - - rancher/longhorn-manager:v0.5.0-rc1 + - rancher/longhorn-manager:v0.5.0-rc2 - --service-account - longhorn-service-account ports: @@ -269,7 +269,7 @@ spec: spec: containers: - name: longhorn-ui - image: rancher/longhorn-ui:v0.5.0-rc1 + image: rancher/longhorn-ui:v0.5.0-rc2 ports: - containerPort: 8000 env: @@ -308,18 +308,18 @@ spec: spec: initContainers: - name: wait-longhorn-manager - image: rancher/longhorn-manager:v0.5.0-rc1 + image: rancher/longhorn-manager:v0.5.0-rc2 command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done'] 
containers: - name: longhorn-driver-deployer - image: rancher/longhorn-manager:v0.5.0-rc1 + image: rancher/longhorn-manager:v0.5.0-rc2 imagePullPolicy: Always command: - longhorn-manager - -d - deploy-driver - --manager-image - - rancher/longhorn-manager:v0.5.0-rc1 + - rancher/longhorn-manager:v0.5.0-rc2 - --manager-url - http://longhorn-backend:9500/v1 # manually choose "flexvolume" or "csi" diff --git a/uninstall/uninstall.yaml b/uninstall/uninstall.yaml index ca12a17..2e0200c 100644 --- a/uninstall/uninstall.yaml +++ b/uninstall/uninstall.yaml @@ -53,7 +53,7 @@ spec: spec: containers: - name: longhorn-uninstall - image: rancher/longhorn-manager:v0.5.0-rc1 + image: rancher/longhorn-manager:v0.5.0-rc2 imagePullPolicy: Always command: - longhorn-manager From cb381474a90aa8f247fa0d8b1726364276b0d9f5 Mon Sep 17 00:00:00 2001 From: Sheng Yang Date: Thu, 16 May 2019 12:12:16 -0700 Subject: [PATCH 15/22] Update chart to v0.5.0-rc2 --- chart/Chart.yaml | 4 ++-- chart/values.yaml | 6 +++--- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/chart/Chart.yaml b/chart/Chart.yaml index c64384f..da6e1ac 100644 --- a/chart/Chart.yaml +++ b/chart/Chart.yaml @@ -1,7 +1,7 @@ apiVersion: v1 name: longhorn -version: 0.5.0-rc1 -appVersion: v0.5.0-rc1 +version: 0.5.0-rc2 +appVersion: v0.5.0-rc2 kubeVersion: ">=v1.8.0-r0" description: Longhorn is a distributed block storage system for Kubernetes powered by Rancher Labs. 
keywords: diff --git a/chart/values.yaml b/chart/values.yaml index a10e65a..9a8abf4 100644 --- a/chart/values.yaml +++ b/chart/values.yaml @@ -4,11 +4,11 @@ image: longhorn: engine: rancher/longhorn-engine - engineTag: v0.5.0-rc1 + engineTag: v0.5.0-rc2 manager: rancher/longhorn-manager - managerTag: v0.5.0-rc1 + managerTag: v0.5.0-rc2 ui: rancher/longhorn-ui - uiTag: v0.5.0-rc1 + uiTag: v0.5.0-rc2 pullPolicy: IfNotPresent service: From 5a7a3cb755aa47c5d61b53974ca018046607efd2 Mon Sep 17 00:00:00 2001 From: Shuo Wu Date: Thu, 16 May 2019 21:20:35 +0000 Subject: [PATCH 16/22] Add doc for Kubernetes integration feature Longhorn issue 536 --- README.md | 1 + docs/k8s-workload.md | 39 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 40 insertions(+) create mode 100644 docs/k8s-workload.md diff --git a/README.md b/README.md index 8540e1f..0ca5802 100644 --- a/README.md +++ b/README.md @@ -204,6 +204,7 @@ More examples are available at `./examples/` ### [Multiple disks](./docs/multidisk.md) ### [iSCSI](./docs/iscsi.md) ### [Base image](./docs/base-image.md) +### [Kubernetes workload in Longhorn UI](./docs/k8s-workload.md) ### [Restoring Stateful Set volumes](./docs/restore_statefulset.md) ### [Google Kubernetes Engine](./docs/gke.md) diff --git a/docs/k8s-workload.md b/docs/k8s-workload.md new file mode 100644 index 0000000..08ae873 --- /dev/null +++ b/docs/k8s-workload.md @@ -0,0 +1,39 @@ +# Workload identification for volume +Now users can identify current workloads or workload history for existing Longhorn volumes. +``` +PV Name: test1-pv +PV Status: Bound + +Namespace: default +PVC Name: test1-pvc + +Last Pod Name: volume-test-1 +Last Pod Status: Running +Last Workload Name: volume-test +Last Workload Type: Statefulset +Last time used by Pod: a few seconds ago +``` + +## About historical status +There are a few fields can contain the historical status instead of the current status. 
+Those fields can be used to help users figure out which workload has used the volume in the past: + +1. `Last time bound with PVC`: If this field is set, it indicates that there is currently no bound PVC for this volume. +The related fields will show the most recently bound PVC. +2. `Last time used by Pod`: If these fields are set, they indicate that there is currently no workload using this volume. +The related fields will show the most recent workload using this volume. + +# PV/PVC creation for existing Longhorn volume +Now users can create a PV/PVC via the Longhorn UI for existing Longhorn volumes. +Only a detached volume can be used by a newly created pod. + +## About special fields of PV/PVC +Since the Longhorn volume already exists while creating the PV/PVC, a StorageClass is not needed for dynamically provisioning +the Longhorn volume. However, the field `storageClassName` is set in the PVC/PV for PVC binding purposes, and +it's unnecessary for users to create the related StorageClass object. + +By default, the StorageClass for Longhorn-created PV/PVC is `longhorn-static`. Users can modify it in +`Setting - General - Default Longhorn Static StorageClass Name` as needed. + +Users need to manually delete PVC and PV created by Longhorn.
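For reference, the PV/PVC pair for an existing Longhorn volume is conceptually similar to the sketch below. This is a hand-written illustration, not UI output: the names, size, and flexVolume options are assumptions that must match the actual volume (see the examples shipped in `./examples/` for the authoritative fields):

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-vol            # assumed to match the existing Longhorn volume name
spec:
  capacity:
    storage: 2Gi                # must match the Longhorn volume size
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-static
  flexVolume:
    driver: rancher.io/longhorn
    options:
      size: "2Gi"
      numberOfReplicas: "3"
      staleReplicaTimeout: "30"
      fromBackup: ""
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-vol-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-static   # matches the PV, used only for binding
  resources:
    requests:
      storage: 2Gi
```

The shared `storageClassName` here only pairs the PVC with the PV; no StorageClass object of that name needs to exist.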
+ From e7c9e7c197747f14a98581db16373b8a71e83eeb Mon Sep 17 00:00:00 2001 From: Shuo Wu Date: Fri, 17 May 2019 18:15:03 +0000 Subject: [PATCH 17/22] Add doc for disaster recovery volume Longhorn issue 535 --- README.md | 1 + docs/dr-volume.md | 53 +++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 54 insertions(+) create mode 100644 docs/dr-volume.md diff --git a/README.md b/README.md index 0ca5802..a745512 100644 --- a/README.md +++ b/README.md @@ -211,6 +211,7 @@ More examples are available at `./examples/` ### [Deal with Kubernetes node failure](./docs/node-failure.md) ### [Use CSI driver on RancherOS/CoreOS + RKE or K3S](./docs/csi-config.md) ### [Restore a backup to an image file](./docs/restore-to-file.md) +### [Disaster Recovery Volume](./docs/dr-volume.md) # Troubleshooting You can click `Generate Support Bundle` link at the bottom of the UI to download a zip file contains Longhorn related configuration and logs. diff --git a/docs/dr-volume.md b/docs/dr-volume.md new file mode 100644 index 0000000..afdf0b5 --- /dev/null +++ b/docs/dr-volume.md @@ -0,0 +1,53 @@ +# Disaster Recovery Volume +## What is Disaster Recovery Volume? +To increase the resiliency of the volume, Longhorn supports disaster recovery volumes. + +A disaster recovery volume is designed for the backup cluster in case the whole main cluster goes down. +A disaster recovery volume is normally in standby mode. Users need to activate it before using it as a normal volume. +A disaster recovery volume can be created from a volume's backup in the backup store. Longhorn will monitor its +original backup volume and incrementally restore from the latest backup.
Once the original volume in the main cluster goes +down and users decide to activate the disaster recovery volume in the backup cluster, the disaster recovery volume can be +activated immediately in most conditions, which greatly reduces the time needed to restore the data from the +backup store to the volume in the backup cluster. + +## How to create a Disaster Recovery Volume? +1. In cluster A, make sure the original volume X has a backup created or a recurring backup schedule set. +2. Set the backup target in cluster B to be the same as cluster A's. +3. In the backup page of cluster B, choose the backup volume X, then create disaster recovery volume Y. It's highly recommended +to use the backup volume name as the disaster recovery volume name. +4. Attach the disaster recovery volume Y to any node. Longhorn will then automatically poll for the last backup of +volume X, and incrementally restore it to volume Y. +5. If volume X is down, users can activate volume Y immediately. Once activated, volume Y will become a +normal Longhorn volume. + 5.1. Notice that deactivating a normal volume is not allowed. + +## About Activating Disaster Recovery Volume +1. A disaster recovery volume doesn't support creating/deleting/reverting snapshots, creating backups, or creating a +PV/PVC. Users cannot update `Backup Target` in Settings if any disaster recovery volumes exist. + +2. When users try to activate a disaster recovery volume, Longhorn will check the last backup of the original volume. If +it hasn't been restored, the restoration will be started, and the activate action will fail. Users need to wait for +the restoration to complete before retrying. + +3. For a disaster recovery volume, `Last Backup` indicates the most recent backup of its original backup volume. If the icon +representing the disaster recovery volume is gray, it means the volume is restoring the `Last Backup` and users cannot activate this +volume right now; if the icon is blue, it means the volume has restored the `Last Backup`.
+ +## RPO and RTO +Typically, incremental restoration is triggered by the periodic backup store update. Users can set the backup store update +interval in `Setting - General - Backupstore Poll Interval`. Notice that this interval can potentially impact +the Recovery Time Objective (RTO). If it is too long, there may be a large amount of data for the disaster recovery volume to +restore, which will take a long time. As for the Recovery Point Objective (RPO), it is determined by the recurring backup +schedule of the backup volume. You can check [here](snapshot-backup.md) to see how to set up recurring backups in Longhorn. + +For example: + +If the recurring backup schedule for normal volume A creates a backup every hour, then the RPO is 1 hour. + +Assume the volume creates a backup every hour, and incrementally restoring the data of one backup takes 5 minutes. + +If `Backupstore Poll Interval` is 30 minutes, then there will be at most one backup's worth of data since the last restoration. +The time for restoring one backup is 5 minutes, so the RTO is 5 minutes. + +If `Backupstore Poll Interval` is 12 hours, then there will be at most 12 backups' worth of data since the last restoration. +The time for restoring the backups is 5 * 12 = 60 minutes, so the RTO is 60 minutes.
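The worst-case arithmetic above can be sketched in a few lines of shell (illustrative only; the variable names are not Longhorn settings):

```
# Worst-case RTO estimate: backups accumulated over one poll interval,
# times the restore time per backup.
poll_interval_minutes=720      # `Backupstore Poll Interval` of 12 hours
backup_interval_minutes=60     # recurring backup every hour
restore_minutes_per_backup=5

# Ceiling division: how many backups can pile up between two polls
backups_behind=$(( (poll_interval_minutes + backup_interval_minutes - 1) / backup_interval_minutes ))
echo $(( backups_behind * restore_minutes_per_backup ))   # prints 60
```

With `poll_interval_minutes=30` the same sketch yields one backup behind, i.e. an RTO of 5 minutes, matching the example above.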
\ No newline at end of file From f4c0650d61c7eebca922c7bc193aad8216425d19 Mon Sep 17 00:00:00 2001 From: Sheng Yang Date: Fri, 17 May 2019 22:48:56 -0700 Subject: [PATCH 18/22] Sync with manager: commit e68fac3fcc898bb5854892fc8ec3d4d1cf91ce71 Author: Sheng Yang Date: Fri May 17 22:44:25 2019 -0700 Longhorn v0.5.0-rc3 release --- deploy/longhorn.yaml | 14 +++++++------- uninstall/uninstall.yaml | 2 +- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/deploy/longhorn.yaml b/deploy/longhorn.yaml index 17107e4..906da59 100644 --- a/deploy/longhorn.yaml +++ b/deploy/longhorn.yaml @@ -181,7 +181,7 @@ spec: spec: containers: - name: longhorn-manager - image: rancher/longhorn-manager:v0.5.0-rc2 + image: rancher/longhorn-manager:v0.5.0-rc3 imagePullPolicy: Always securityContext: privileged: true @@ -190,9 +190,9 @@ spec: - -d - daemon - --engine-image - - rancher/longhorn-engine:v0.5.0-rc2 + - rancher/longhorn-engine:v0.5.0-rc3 - --manager-image - - rancher/longhorn-manager:v0.5.0-rc2 + - rancher/longhorn-manager:v0.5.0-rc3 - --service-account - longhorn-service-account ports: @@ -269,7 +269,7 @@ spec: spec: containers: - name: longhorn-ui - image: rancher/longhorn-ui:v0.5.0-rc2 + image: rancher/longhorn-ui:v0.5.0-rc3 ports: - containerPort: 8000 env: @@ -308,18 +308,18 @@ spec: spec: initContainers: - name: wait-longhorn-manager - image: rancher/longhorn-manager:v0.5.0-rc2 + image: rancher/longhorn-manager:v0.5.0-rc3 command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done'] containers: - name: longhorn-driver-deployer - image: rancher/longhorn-manager:v0.5.0-rc2 + image: rancher/longhorn-manager:v0.5.0-rc3 imagePullPolicy: Always command: - longhorn-manager - -d - deploy-driver - --manager-image - - rancher/longhorn-manager:v0.5.0-rc2 + - rancher/longhorn-manager:v0.5.0-rc3 - --manager-url - http://longhorn-backend:9500/v1 # manually choose "flexvolume" or "csi" 
diff --git a/uninstall/uninstall.yaml b/uninstall/uninstall.yaml index 2e0200c..4590bec 100644 --- a/uninstall/uninstall.yaml +++ b/uninstall/uninstall.yaml @@ -53,7 +53,7 @@ spec: spec: containers: - name: longhorn-uninstall - image: rancher/longhorn-manager:v0.5.0-rc2 + image: rancher/longhorn-manager:v0.5.0-rc3 imagePullPolicy: Always command: - longhorn-manager From f8e5a42cfb4113828b409fbf5e6a867f88d03fbb Mon Sep 17 00:00:00 2001 From: Sheng Yang Date: Fri, 17 May 2019 22:50:09 -0700 Subject: [PATCH 19/22] Update chart to v0.5.0-rc3 --- chart/Chart.yaml | 4 ++-- chart/values.yaml | 6 +++--- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/chart/Chart.yaml b/chart/Chart.yaml index da6e1ac..0cc9618 100644 --- a/chart/Chart.yaml +++ b/chart/Chart.yaml @@ -1,7 +1,7 @@ apiVersion: v1 name: longhorn -version: 0.5.0-rc2 -appVersion: v0.5.0-rc2 +version: 0.5.0-rc3 +appVersion: v0.5.0-rc3 kubeVersion: ">=v1.8.0-r0" description: Longhorn is a distributed block storage system for Kubernetes powered by Rancher Labs. 
keywords: diff --git a/chart/values.yaml b/chart/values.yaml index 9a8abf4..c801039 100644 --- a/chart/values.yaml +++ b/chart/values.yaml @@ -4,11 +4,11 @@ image: longhorn: engine: rancher/longhorn-engine - engineTag: v0.5.0-rc2 + engineTag: v0.5.0-rc3 manager: rancher/longhorn-manager - managerTag: v0.5.0-rc2 + managerTag: v0.5.0-rc3 ui: rancher/longhorn-ui - uiTag: v0.5.0-rc2 + uiTag: v0.5.0-rc3 pullPolicy: IfNotPresent service: From ffeac6836b93fe2df50f4cd0fa365a6d4aedee9b Mon Sep 17 00:00:00 2001 From: Sheng Yang Date: Sat, 18 May 2019 12:06:05 -0700 Subject: [PATCH 20/22] Sync with manager: commit 7d4c3fb00e259e98fe2fd8f9a69976e541884a9a Author: Sheng Yang Date: Sat May 18 11:57:06 2019 -0700 Longhorn v0.5.0 release --- deploy/longhorn.yaml | 14 +++++++------- uninstall/uninstall.yaml | 2 +- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/deploy/longhorn.yaml b/deploy/longhorn.yaml index 906da59..dd7c602 100644 --- a/deploy/longhorn.yaml +++ b/deploy/longhorn.yaml @@ -181,7 +181,7 @@ spec: spec: containers: - name: longhorn-manager - image: rancher/longhorn-manager:v0.5.0-rc3 + image: rancher/longhorn-manager:v0.5.0 imagePullPolicy: Always securityContext: privileged: true @@ -190,9 +190,9 @@ spec: - -d - daemon - --engine-image - - rancher/longhorn-engine:v0.5.0-rc3 + - rancher/longhorn-engine:v0.5.0 - --manager-image - - rancher/longhorn-manager:v0.5.0-rc3 + - rancher/longhorn-manager:v0.5.0 - --service-account - longhorn-service-account ports: @@ -269,7 +269,7 @@ spec: spec: containers: - name: longhorn-ui - image: rancher/longhorn-ui:v0.5.0-rc3 + image: rancher/longhorn-ui:v0.5.0 ports: - containerPort: 8000 env: @@ -308,18 +308,18 @@ spec: spec: initContainers: - name: wait-longhorn-manager - image: rancher/longhorn-manager:v0.5.0-rc3 + image: rancher/longhorn-manager:v0.5.0 command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done'] containers: - 
name: longhorn-driver-deployer - image: rancher/longhorn-manager:v0.5.0-rc3 + image: rancher/longhorn-manager:v0.5.0 imagePullPolicy: Always command: - longhorn-manager - -d - deploy-driver - --manager-image - - rancher/longhorn-manager:v0.5.0-rc3 + - rancher/longhorn-manager:v0.5.0 - --manager-url - http://longhorn-backend:9500/v1 # manually choose "flexvolume" or "csi" diff --git a/uninstall/uninstall.yaml b/uninstall/uninstall.yaml index 4590bec..8812090 100644 --- a/uninstall/uninstall.yaml +++ b/uninstall/uninstall.yaml @@ -53,7 +53,7 @@ spec: spec: containers: - name: longhorn-uninstall - image: rancher/longhorn-manager:v0.5.0-rc3 + image: rancher/longhorn-manager:v0.5.0 imagePullPolicy: Always command: - longhorn-manager From 3d6e477407783bcd090834be74f36a8fbc3c8ae7 Mon Sep 17 00:00:00 2001 From: Sheng Yang Date: Sat, 18 May 2019 12:07:10 -0700 Subject: [PATCH 21/22] Update chart to v0.5.0 --- chart/Chart.yaml | 4 ++-- chart/values.yaml | 6 +++--- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/chart/Chart.yaml b/chart/Chart.yaml index 0cc9618..2903fa3 100644 --- a/chart/Chart.yaml +++ b/chart/Chart.yaml @@ -1,7 +1,7 @@ apiVersion: v1 name: longhorn -version: 0.5.0-rc3 -appVersion: v0.5.0-rc3 +version: 0.5.0 +appVersion: v0.5.0 kubeVersion: ">=v1.8.0-r0" description: Longhorn is a distributed block storage system for Kubernetes powered by Rancher Labs. 
keywords: diff --git a/chart/values.yaml b/chart/values.yaml index c801039..a84ddbb 100644 --- a/chart/values.yaml +++ b/chart/values.yaml @@ -4,11 +4,11 @@ image: longhorn: engine: rancher/longhorn-engine - engineTag: v0.5.0-rc3 + engineTag: v0.5.0 manager: rancher/longhorn-manager - managerTag: v0.5.0-rc3 + managerTag: v0.5.0 ui: rancher/longhorn-ui - uiTag: v0.5.0-rc3 + uiTag: v0.5.0 pullPolicy: IfNotPresent service: From f0fd037edaeed9d6a1851577a75418b90020008f Mon Sep 17 00:00:00 2001 From: Sheng Yang Date: Sat, 18 May 2019 12:09:06 -0700 Subject: [PATCH 22/22] Longhorn v0.5.0 release Highlights: 1. Users can now use Disaster Recovery Volume support (#495) to recover the volume in another Kubernetes cluster with a defined RTO and RPO. See [here](https://github.com/rancher/longhorn/blob/v0.5.0/docs/dr-volume.md) for details 2. Users can now see Kubernetes workload information and create PV/PVC in the Longhorn UI (#461). See [here](https://github.com/rancher/longhorn/blob/v0.5.0/docs/k8s-workload.md) for details 3. Users can now set backup scheduling in the storage class (#362) 4. We now ship the Helm chart in the Longhorn repo, in addition to Rancher Apps (#445) See all the issues resolved in v0.5.0 at: https://github.com/rancher/longhorn/milestone/3?closed=1 The volume engines need to be upgraded to v0.5.0 as well. Please follow the instructions to upgrade the engine. --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index a745512..bc71566 100644 --- a/README.md +++ b/README.md @@ -19,7 +19,7 @@ You can read more technical details of Longhorn [here](http://rancher.com/micros Longhorn is alpha-quality software. We appreciate your willingness to deploy Longhorn and provide feedback. -The latest release of Longhorn is **v0.4.1**. +The latest release of Longhorn is **v0.5.0**. ## Source code Longhorn is 100% open source software. Project source code is spread across a number of repos: