Merge pull request #561 from rancher/v0.5.0

Longhorn v0.5.0 release
Sheng Yang 2019-05-18 12:44:41 -07:00 committed by GitHub
commit 097dc50bbf
27 changed files with 1085 additions and 27 deletions


@ -19,7 +19,7 @@ You can read more technical details of Longhorn [here](http://rancher.com/micros
Longhorn is alpha-quality software. We appreciate your willingness to deploy Longhorn and provide feedback.
The latest release of Longhorn is **v0.5.0**.
## Source code
Longhorn is 100% open source software. Project source code is spread across a number of repos:
@ -56,14 +56,29 @@ If there is a new version of Longhorn available, you will see an `Upgrade Availa
## On any Kubernetes cluster
### Install Longhorn with kubectl
You can install Longhorn on any Kubernetes cluster using the following command:
```
kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml
```
Google Kubernetes Engine (GKE) requires additional setup in order for Longhorn to function properly. If you are a GKE user, read [this page](docs/gke.md) before proceeding.
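Once the manifest is applied, the Longhorn components come up in the `longhorn-system` namespace; a quick sanity check (a sketch, not an official step) is to watch the pods until they are all `Running`:
```
kubectl -n longhorn-system get pods -w
```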
### Install Longhorn with Helm
First, you need to initialize Helm locally and [install Tiller into your Kubernetes cluster with RBAC](https://helm.sh/docs/using_helm/#role-based-access-control).
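One common way to do that (a sketch for Helm v2; the Helm documentation linked above is authoritative) is to create a `tiller` service account bound to `cluster-admin` and initialize Helm with it:
```
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
```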
Then download the Longhorn repository:
```
git clone https://github.com/rancher/longhorn.git
```
Now use the following command to install Longhorn:
```
helm install ./longhorn/chart --name longhorn --namespace longhorn-system
```
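The chart's `values.yaml` (added in this release, shown later in this commit) exposes the configurable options. For example, a sketch that overrides the default StorageClass replica count at install time:
```
helm install ./longhorn/chart --name longhorn --namespace longhorn-system --set persistence.defaultClassReplicaCount=2
```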
---
Longhorn will be installed in the namespace `longhorn-system`.
One of the two available drivers (CSI and Flexvolume) will be chosen automatically based on the version of Kubernetes you use. See [here](docs/driver.md) for details.
@ -108,8 +123,23 @@ Since v0.3.3, Longhorn is able to perform fully-automated non-disruptive upgrade
If you're upgrading from Longhorn v0.3.0 or newer:
## Upgrade Longhorn manager
##### On Kubernetes clusters managed by Rancher 2.1 or newer
Follow [the same steps for installation](#install) to upgrade the Longhorn manager.
##### Using kubectl
```
kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml
```
##### Using Helm
```
helm upgrade longhorn ./longhorn/chart
```
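The `./longhorn/chart` path assumes the repository clone from the installation step above; if so, refresh it first so the upgrade picks up the new chart:
```
git -C ./longhorn pull
```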
## Upgrade Longhorn engine
After upgrading the manager, follow [the steps here](docs/upgrade.md#upgrade-longhorn-engine) to upgrade the Longhorn engine for existing volumes.
1. For non-disruptive upgrade, follow [the live upgrade steps here](./docs/upgrade.md#live-upgrade)
For more details about upgrading Longhorn or upgrading from older versions, [see here](docs/upgrade.md).
@ -174,12 +204,14 @@ More examples are available at `./examples/`
### [Multiple disks, including how to change the default path for storage](./docs/multidisk.md)
### [iSCSI](./docs/iscsi.md)
### [Base image](./docs/base-image.md)
### [Kubernetes workload in Longhorn UI](./docs/k8s-workload.md)
### [Restoring Stateful Set volumes](./docs/restore_statefulset.md)
### [Google Kubernetes Engine](./docs/gke.md)
### [Deal with Kubernetes node failure](./docs/node-failure.md)
### [Use CSI driver on RancherOS/CoreOS + RKE or K3S](./docs/csi-config.md)
### [Restore a backup to an image file](./docs/restore-to-file.md)
### [Disaster Recovery Volume](./docs/dr-volume.md)
# Troubleshooting
You can click the `Generate Support Bundle` link at the bottom of the UI to download a zip file containing Longhorn-related configuration and logs.
@ -188,30 +220,44 @@ See [here](./docs/troubleshooting.md) for the troubleshooting guide.
# Uninstall Longhorn
### Using kubectl
1. To prevent damaging the Kubernetes cluster, we recommend deleting all Kubernetes workloads using Longhorn volumes (PersistentVolume, PersistentVolumeClaim, StorageClass, Deployment, StatefulSet, DaemonSet, etc) first.
2. Create the uninstallation job to clean up CRDs from the system and wait for success:
```
kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/master/uninstall/uninstall.yaml
kubectl get job/longhorn-uninstall -w
```
Example output:
```
$ kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/master/uninstall/uninstall.yaml
serviceaccount/longhorn-uninstall-service-account created
clusterrole.rbac.authorization.k8s.io/longhorn-uninstall-role created
clusterrolebinding.rbac.authorization.k8s.io/longhorn-uninstall-bind created
job.batch/longhorn-uninstall created

$ kubectl get job/longhorn-uninstall -w
NAME                 COMPLETIONS   DURATION   AGE
longhorn-uninstall   0/1           3s         3s
longhorn-uninstall   1/1           20s        20s
^C
```
3. Remove remaining components:
```
kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml
kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn/master/uninstall/uninstall.yaml
```
Tip: If you run `kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml` first and it gets stuck,
pressing `Ctrl C` and then running `kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/master/uninstall/uninstall.yaml` can also help you remove Longhorn. Finally, don't forget to clean up the remaining components.
### Using Helm
```
helm delete longhorn --purge
```
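The chart ships a `pre-delete` hook job named `longhorn-uninstall` (shown later in this commit) that cleans up Longhorn resources before the release is removed. Assuming the release was installed into `longhorn-system`, you can watch it with:
```
kubectl -n longhorn-system get job/longhorn-uninstall -w
```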
## License
Copyright (c) 2014-2019 [Rancher Labs, Inc.](http://rancher.com/)

chart/.helmignore (new file, 21 lines)

@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

chart/Chart.yaml (new file, 24 lines)

@ -0,0 +1,24 @@
apiVersion: v1
name: longhorn
version: 0.5.0
appVersion: v0.5.0
kubeVersion: ">=v1.8.0-r0"
description: Longhorn is a distributed block storage system for Kubernetes powered by Rancher Labs.
keywords:
- longhorn
- storage
- distributed
- block
- device
- iscsi
home: https://github.com/rancher/longhorn
sources:
- https://github.com/rancher/longhorn
- https://github.com/rancher/longhorn-engine
- https://github.com/rancher/longhorn-manager
- https://github.com/rancher/longhorn-ui
- https://github.com/rancher/longhorn-tests
maintainers:
- name: rancher
email: charts@rancher.com
icon: https://s3.us-east-2.amazonaws.com/longhorn-assets/longhorn-logo.svg

chart/README.md (new file, 57 lines)

@ -0,0 +1,57 @@
# Rancher Longhorn Chart
The following document pertains to running Longhorn from the Rancher 2.0 chart.
## Source Code
Longhorn is 100% open source software. Project source code is spread across a number of repos:
1. Longhorn Engine -- Core controller/replica logic https://github.com/rancher/longhorn-engine
2. Longhorn Manager -- Longhorn orchestration, includes Flexvolume driver for Kubernetes https://github.com/rancher/longhorn-manager
3. Longhorn UI -- Dashboard https://github.com/rancher/longhorn-ui
## Prerequisites
1. Rancher v2.1+
2. Docker v1.13+
3. Kubernetes v1.8+ cluster with 1 or more nodes and the Mount Propagation feature enabled. If your Kubernetes cluster was provisioned by Rancher v2.0.7 or later, the MountPropagation feature is enabled by default. [Check your Kubernetes environment now](https://github.com/rancher/longhorn#environment-check-script). If MountPropagation is disabled, the Kubernetes Flexvolume driver will be deployed instead of the default CSI driver. The Base Image feature will also be disabled if MountPropagation is disabled.
4. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
5. Make sure `open-iscsi` has been installed on all nodes of the Kubernetes cluster. For GKE, Ubuntu is the recommended guest OS image since it already contains `open-iscsi`.
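A minimal by-hand check of those prerequisites on a node could look like the sketch below (the environment check script linked above is more thorough):
```
# report any missing tools; iscsiadm is provided by open-iscsi
for cmd in curl findmnt grep awk blkid iscsiadm; do
  command -v "$cmd" >/dev/null || echo "missing: $cmd"
done
```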
## Uninstallation
1. To prevent damage to the Kubernetes cluster, we recommend deleting all Kubernetes workloads using Longhorn volumes (PersistentVolume, PersistentVolumeClaim, StorageClass, Deployment, StatefulSet, DaemonSet, etc).
2. From Rancher UI, navigate to `Catalog Apps` tab and delete Longhorn app.
## Troubleshooting
### I deleted the Longhorn App from Rancher UI instead of following the uninstallation procedure
Redeploy the (same version) Longhorn App. Follow the uninstallation procedure above.
### Problems with CRDs
If your CRD instances or the CRDs themselves can't be deleted for whatever reason, run the commands below to clean up. Caution: this will wipe all Longhorn state!
```
# Delete CRD finalizers, instances and definitions
for crd in $(kubectl get crd -o jsonpath={.items[*].metadata.name} | tr ' ' '\n' | grep longhorn.rancher.io); do
kubectl -n ${NAMESPACE} get $crd -o yaml | sed "s/\- longhorn.rancher.io//g" | kubectl apply -f -
kubectl -n ${NAMESPACE} delete $crd --all
kubectl delete crd/$crd
done
```
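The loop above assumes `NAMESPACE` is set to the namespace the chart was deployed into (typically `longhorn-system`), e.g.:
```
NAMESPACE=longhorn-system
```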
### Volume can be attached/detached from UI, but Kubernetes Pod/StatefulSet etc cannot use it
Check whether the volume plugin directory has been set correctly. It is automatically detected unless the user explicitly sets it.
By default, Kubernetes uses `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites).
Some vendors choose to change the directory for various reasons. For example, GKE uses `/home/kubernetes/flexvolume` instead.
Users can find the correct directory by running `ps aux | grep kubelet` on the host and checking the `--volume-plugin-dir` parameter. If it is not set, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used.
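For example, this one-liner (a sketch; the exact output depends on how the kubelet was started) prints the flag if it is set:
```
ps aux | grep kubelet | grep -o -- '--volume-plugin-dir=[^ ]*'
```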
---
Please see [link](https://github.com/rancher/longhorn) for more information.

chart/app-readme.md (new file, 7 lines)

@ -0,0 +1,7 @@
# Longhorn
Longhorn is a lightweight, reliable and easy to use distributed block storage system for Kubernetes. Once deployed, users can leverage persistent volumes provided by Longhorn.
Longhorn creates a dedicated storage controller for each volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes. Longhorn supports snapshots, backups and even allows you to schedule recurring snapshots and backups!
[Chart Documentation](https://github.com/rancher/longhorn/blob/master/docs/chart.md)

chart/questions.yml (new file, 109 lines)

@ -0,0 +1,109 @@
categories:
- storage
labels:
io.rancher.certified: experimental
namespace: longhorn-system
questions:
- variable: driver
default: csi
description: "Deploy either the CSI or FlexVolume driver. CSI is newer but requires MountPropagation, a feature enabled by default in Kubernetes v1.10 and later"
type: enum
options:
- csi
- flexvolume
label: Longhorn Kubernetes Driver
group: "Longhorn Settings"
show_subquestion_if: flexvolume
subquestions:
- variable: persistence.flexvolumePath
default: ""
description: "Leave blank to autodetect. For RKE, use `/var/lib/kubelet/volumeplugins`. For GKE, use `/home/kubernetes/flexvolume/` instead. Users can find the correct directory by running `ps aux|grep kubelet` on the host and check the --volume-plugin-dir parameter. If there is none, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used."
type: string
label: Longhorn Flexvolume Path
show_subquestion_if: csi
subquestions:
- variable: csi.attacherImage
default:
description: "Specify CSI attacher image. Leave blank to autodetect."
type: string
label: Longhorn CSI Attacher Image
- variable: csi.provisionerImage
default:
description: "Specify CSI provisioner image. Leave blank to autodetect."
type: string
label: Longhorn CSI Provisioner Image
- variable: csi.driverRegistrarImage
default:
description: "Specify CSI Driver Registrar image. Leave blank to autodetect."
type: string
label: Longhorn CSI Driver Registrar Image
- variable: csi.kubeletRootDir
default:
description: "Specify kubelet root-dir. Leave blank to autodetect."
type: string
label: Kubelet Root Directory
- variable: csi.attacherReplicaCount
type: int
default:
min: 1
max: 10
description: "Specify number of CSI Attacher replica. By default 3."
label: Longhorn CSI Attacher replica count
- variable: csi.provisionerReplicaCount
type: int
default:
min: 1
max: 10
description: "Specify number of CSI Provisioner replica. By default 3."
label: Longhorn CSI Provisioner replica count
- variable: persistence.defaultClass
default: "true"
description: "Set as default StorageClass"
group: "Longhorn Settings"
type: boolean
required: true
label: Default Storage Class
- variable: persistence.defaultClassReplicaCount
description: "Set replica count for default StorageClass"
group: "Longhorn Settings"
type: int
default: 3
min: 1
max: 10
label: Default Storage Class Replica Count
- variable: ingress.enabled
default: "false"
description: "Expose app using Layer 7 Load Balancer - ingress"
type: boolean
group: "Services and Load Balancing"
label: Expose app using Layer 7 Load Balancer
show_subquestion_if: true
subquestions:
- variable: ingress.host
default: "xip.io"
description: "layer 7 Load Balancer hostname"
type: hostname
required: true
label: Layer 7 Load Balancer Hostname
- variable: service.ui.type
default: "Rancher-Proxy"
description: "Define Longhorn UI service type"
type: enum
options:
- "ClusterIP"
- "NodePort"
- "LoadBalancer"
- "Rancher-Proxy"
label: Longhorn UI Service
show_if: "ingress.enabled=false"
group: "Services and Load Balancing"
show_subquestion_if: "NodePort"
subquestions:
- variable: service.ui.nodePort
default: ""
description: "NodePort port number(to set explicitly, choose port between 30000-32767)"
type: int
min: 30000
max: 32767
show_if: "service.ui.type=NodePort||service.ui.type=LoadBalancer"
label: UI Service NodePort number


@ -0,0 +1,2 @@
1. Get the application URL by running these commands:
kubectl get po -n $release_namespace


@ -0,0 +1,22 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "longhorn.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "longhorn.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "longhorn.managerIP" -}}
{{- $fullname := (include "longhorn.fullname" .) -}}
{{- printf "http://%s-backend:9500" $fullname | trunc 63 | trimSuffix "-" -}}
{{- end -}}


@ -0,0 +1,32 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: longhorn-role
rules:
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- "*"
- apiGroups: [""]
resources: ["pods", "events", "persistentvolumes", "persistentvolumeclaims", "nodes", "proxy/nodes", "pods/log", "secrets", "services", "endpoints", "configmaps"]
verbs: ["*"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list"]
- apiGroups: ["apps"]
resources: ["daemonsets", "statefulsets", "deployments"]
verbs: ["*"]
- apiGroups: ["batch"]
resources: ["jobs", "cronjobs"]
verbs: ["*"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses", "volumeattachments"]
verbs: ["*"]
- apiGroups: ["csi.storage.k8s.io"]
resources: ["csinodeinfos"]
verbs: ["get", "list", "watch"]
- apiGroups: ["longhorn.rancher.io"]
resources: ["volumes", "engines", "replicas", "settings", "engineimages", "nodes"]
verbs: ["*"]


@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: longhorn-bind
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: longhorn-role
subjects:
- kind: ServiceAccount
name: longhorn-service-account
namespace: {{ .Release.Namespace }}

chart/templates/crds.yaml (new file, 107 lines)

@ -0,0 +1,107 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Engine
name: engines.longhorn.rancher.io
spec:
group: longhorn.rancher.io
names:
kind: Engine
listKind: EngineList
plural: engines
shortNames:
- lhe
singular: engine
scope: Namespaced
version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Replica
name: replicas.longhorn.rancher.io
spec:
group: longhorn.rancher.io
names:
kind: Replica
listKind: ReplicaList
plural: replicas
shortNames:
- lhr
singular: replica
scope: Namespaced
version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Setting
name: settings.longhorn.rancher.io
spec:
group: longhorn.rancher.io
names:
kind: Setting
listKind: SettingList
plural: settings
shortNames:
- lhs
singular: setting
scope: Namespaced
version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Volume
name: volumes.longhorn.rancher.io
spec:
group: longhorn.rancher.io
names:
kind: Volume
listKind: VolumeList
plural: volumes
shortNames:
- lhv
singular: volume
scope: Namespaced
version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: EngineImage
name: engineimages.longhorn.rancher.io
spec:
group: longhorn.rancher.io
names:
kind: EngineImage
listKind: EngineImageList
plural: engineimages
shortNames:
- lhei
singular: engineimage
scope: Namespaced
version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Node
name: nodes.longhorn.rancher.io
spec:
group: longhorn.rancher.io
names:
kind: Node
listKind: NodeList
plural: nodes
shortNames:
- lhn
singular: node
scope: Namespaced
version: v1alpha1
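Once these CRDs are registered, the short names defined above work with kubectl; for example (a sketch, assuming the default `longhorn-system` namespace):
```
kubectl get crd | grep longhorn.rancher.io
kubectl -n longhorn-system get lhv,lhe,lhr
```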


@ -0,0 +1,94 @@
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
labels:
app: longhorn-manager
name: longhorn-manager
namespace: {{ .Release.Namespace }}
spec:
selector:
matchLabels:
app: longhorn-manager
template:
metadata:
labels:
app: longhorn-manager
spec:
containers:
- name: longhorn-manager
image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
imagePullPolicy: Always
securityContext:
privileged: true
command:
- longhorn-manager
- -d
- daemon
- --engine-image
- "{{ .Values.image.longhorn.engine }}:{{ .Values.image.longhorn.engineTag }}"
- --manager-image
- "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
- --service-account
- longhorn-service-account
ports:
- containerPort: 9500
name: manager
volumeMounts:
- name: dev
mountPath: /host/dev/
- name: proc
mountPath: /host/proc/
- name: varrun
mountPath: /var/run/
- name: longhorn
mountPath: /var/lib/rancher/longhorn/
mountPropagation: Bidirectional
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: LONGHORN_BACKEND_SVC
value: longhorn-backend
volumes:
- name: dev
hostPath:
path: /dev/
- name: proc
hostPath:
path: /proc/
- name: varrun
hostPath:
path: /var/run/
- name: longhorn
hostPath:
path: /var/lib/rancher/longhorn/
serviceAccountName: longhorn-service-account
---
apiVersion: v1
kind: Service
metadata:
labels:
app: longhorn-manager
name: longhorn-backend
namespace: {{ .Release.Namespace }}
spec:
type: {{ .Values.service.manager.type }}
sessionAffinity: ClientIP
selector:
app: longhorn-manager
ports:
- name: manager
port: 9500
targetPort: manager
{{- if .Values.service.manager.nodePort }}
nodePort: {{ .Values.service.manager.nodePort }}
{{- end }}


@ -0,0 +1,73 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: longhorn-driver-deployer
namespace: {{ .Release.Namespace }}
spec:
replicas: 1
selector:
matchLabels:
app: longhorn-driver-deployer
template:
metadata:
labels:
app: longhorn-driver-deployer
spec:
initContainers:
- name: wait-longhorn-manager
image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done']
containers:
- name: longhorn-driver-deployer
image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
imagePullPolicy: Always
command:
- longhorn-manager
- -d
- deploy-driver
- --manager-image
- "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
- --manager-url
- http://longhorn-backend:9500/v1
- --driver
- "{{ .Values.driver }}"
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: FLEXVOLUME_DIR
value: {{ .Values.persistence.flexvolumePath }}
{{- if .Values.csi.kubeletRootDir }}
- name: KUBELET_ROOT_DIR
value: {{ .Values.csi.kubeletRootDir }}
{{- end }}
{{- if .Values.csi.attacherImage }}
- name: CSI_ATTACHER_IMAGE
value: {{ .Values.csi.attacherImage }}
{{- end }}
{{- if .Values.csi.provisionerImage }}
- name: CSI_PROVISIONER_IMAGE
value: {{ .Values.csi.provisionerImage }}
{{- end }}
{{- if .Values.csi.driverRegistrarImage }}
- name: CSI_DRIVER_REGISTRAR_IMAGE
value: {{ .Values.csi.driverRegistrarImage }}
{{- end }}
{{- if .Values.csi.attacherReplicaCount }}
- name: CSI_ATTACHER_REPLICA_COUNT
value: "{{ .Values.csi.attacherReplicaCount }}"
{{- end }}
{{- if .Values.csi.provisionerReplicaCount }}
- name: CSI_PROVISIONER_REPLICA_COUNT
value: "{{ .Values.csi.provisionerReplicaCount }}"
{{- end }}
serviceAccountName: longhorn-service-account


@ -0,0 +1,52 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
labels:
app: longhorn-ui
name: longhorn-ui
namespace: {{ .Release.Namespace }}
spec:
replicas: 1
selector:
matchLabels:
app: longhorn-ui
template:
metadata:
labels:
app: longhorn-ui
spec:
containers:
- name: longhorn-ui
image: "{{ .Values.image.longhorn.ui }}:{{ .Values.image.longhorn.uiTag }}"
ports:
- containerPort: 8000
name: http
env:
- name: LONGHORN_MANAGER_IP
value: "http://longhorn-backend:9500"
---
kind: Service
apiVersion: v1
metadata:
labels:
app: longhorn-ui
{{- if eq .Values.service.ui.type "Rancher-Proxy" }}
kubernetes.io/cluster-service: "true"
{{- end }}
name: longhorn-frontend
namespace: {{ .Release.Namespace }}
spec:
{{- if eq .Values.service.ui.type "Rancher-Proxy" }}
type: ClusterIP
{{- else }}
type: {{ .Values.service.ui.type }}
{{- end }}
selector:
app: longhorn-ui
ports:
- name: http
port: 80
targetPort: http
{{- if .Values.service.ui.nodePort }}
nodePort: {{ .Values.service.ui.nodePort }}
{{- end }}


@ -0,0 +1,30 @@
{{- if .Values.ingress.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: longhorn-ingress
labels:
app: longhorn-ingress
annotations:
{{- if .Values.ingress.tls }}
ingress.kubernetes.io/secure-backends: "true"
{{- end }}
{{- range $key, $value := .Values.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
rules:
- host: {{ .Values.ingress.host }}
http:
paths:
- path: {{ default "" .Values.ingress.path }}
backend:
serviceName: longhorn-frontend
servicePort: 80
{{- if .Values.ingress.tls }}
tls:
- hosts:
- {{ .Values.ingress.host }}
secretName: {{ .Values.ingress.tlsSecret }}
{{- end }}
{{- end }}


@ -0,0 +1,31 @@
apiVersion: batch/v1
kind: Job
metadata:
annotations:
"helm.sh/hook": post-upgrade
"helm.sh/hook-delete-policy": hook-succeeded
name: longhorn-post-upgrade
namespace: {{ .Release.Namespace }}
spec:
activeDeadlineSeconds: 900
backoffLimit: 1
template:
metadata:
name: longhorn-post-upgrade
spec:
containers:
- name: longhorn-post-upgrade
image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
imagePullPolicy: Always
command:
- longhorn-manager
- post-upgrade
- --from-version
- 0.0.1
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
restartPolicy: OnFailure
serviceAccountName: longhorn-service-account


@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: longhorn-service-account
namespace: {{ .Release.Namespace }}


@ -0,0 +1,17 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: longhorn
{{- if .Values.persistence.defaultClass }}
annotations:
storageclass.beta.kubernetes.io/is-default-class: "true"
{{- else }}
annotations:
storageclass.beta.kubernetes.io/is-default-class: "false"
{{- end }}
provisioner: rancher.io/longhorn
parameters:
numberOfReplicas: "{{ .Values.persistence.defaultClassReplicaCount }}"
staleReplicaTimeout: "30"
fromBackup: ""
baseImage: ""


@ -0,0 +1,15 @@
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.secrets }}
apiVersion: v1
kind: Secret
metadata:
name: longhorn
labels:
app: longhorn
type: kubernetes.io/tls
data:
tls.crt: {{ .certificate | b64enc }}
tls.key: {{ .key | b64enc }}
---
{{- end }}
{{- end }}


@ -0,0 +1,30 @@
apiVersion: batch/v1
kind: Job
metadata:
annotations:
"helm.sh/hook": pre-delete
"helm.sh/hook-delete-policy": hook-succeeded
name: longhorn-uninstall
namespace: {{ .Release.Namespace }}
spec:
activeDeadlineSeconds: 900
backoffLimit: 1
template:
metadata:
name: longhorn-uninstall
spec:
containers:
- name: longhorn-uninstall
image: "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
imagePullPolicy: Always
command:
- longhorn-manager
- uninstall
- --force
env:
- name: LONGHORN_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
restartPolicy: OnFailure
serviceAccountName: longhorn-service-account

chart/values.yaml (new file, 91 lines)

@ -0,0 +1,91 @@
# Default values for longhorn.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
longhorn:
engine: rancher/longhorn-engine
engineTag: v0.5.0
manager: rancher/longhorn-manager
managerTag: v0.5.0
ui: rancher/longhorn-ui
uiTag: v0.5.0
pullPolicy: IfNotPresent
service:
ui:
type: LoadBalancer
nodePort: ""
manager:
type: ClusterIP
nodePort: ""
# deploy either 'flexvolume' or 'csi' driver
driver: csi
persistence:
# for GKE uses /home/kubernetes/flexvolume/ instead, User can find the correct directory by running ps aux|grep kubelet on the host and check the --volume-plugin-dir parameter.
# If there is none, the default /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ will be used.
flexvolumePath:
defaultClass: true
defaultClassReplicaCount: 3
csi:
attacherImage:
provisionerImage:
driverRegistrarImage:
kubeletRootDir:
attacherReplicaCount:
provisionerReplicaCount:
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
#
ingress:
## Set to true to enable ingress record generation
enabled: false
host: xip.io
## Set this to true in order to enable TLS on the ingress record
## A side effect of this will be that the backend service will be connected at port 443
tls: false
## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
tlsSecret: longhorn.local-tls
## Ingress annotations done as key:value pairs
## If you're using kube-lego, you will want to add:
## kubernetes.io/tls-acme: true
##
## For a full list of possible ingress annotations, please see
## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
##
## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: true
secrets:
## If you're providing your own certificates, please use this to add the certificates as secrets
## key and certificate should start with -----BEGIN CERTIFICATE----- or
## -----BEGIN RSA PRIVATE KEY-----
##
## name should line up with a tlsSecret set further up
## If you're using kube-lego, this is unneeded, as it will create the secret for you if it is not set
##
## It is also possible to create and manage the certificates outside of this helm chart
## Please see README.md for more information
# - name: longhorn.local-tls
# key:
# certificate:


@ -181,7 +181,7 @@ spec:
spec:
containers:
- name: longhorn-manager
image: rancher/longhorn-manager:v0.5.0
imagePullPolicy: Always
securityContext:
privileged: true
@ -190,9 +190,9 @@ spec:
- -d
- daemon
- --engine-image
- rancher/longhorn-engine:v0.5.0
- --manager-image
- rancher/longhorn-manager:v0.5.0
- --service-account
- longhorn-service-account
ports:
@ -269,7 +269,7 @@ spec:
spec:
containers:
- name: longhorn-ui
image: rancher/longhorn-ui:v0.5.0
ports:
- containerPort: 8000
env:
@ -308,26 +308,35 @@ spec:
spec:
initContainers:
- name: wait-longhorn-manager
image: rancher/longhorn-manager:v0.5.0
command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done']
containers:
- name: longhorn-driver-deployer
image: rancher/longhorn-manager:v0.5.0
imagePullPolicy: Always
command:
- longhorn-manager
- -d
- deploy-driver
- --manager-image
- rancher/longhorn-manager:v0.5.0
- --manager-url
- http://longhorn-backend:9500/v1
# manually choose "flexvolume" or "csi"
#- --driver
#- flexvolume
# manually set root directory for flexvolume
#- --flexvolume-dir
#- /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
# manually set root directory for csi
#- --kubelet-root-dir
#- /var/lib/rancher/k3s/agent/kubelet
# manually specify number of CSI attacher replicas
#- --csi-attacher-replica-count
#- "3"
# manually specify number of CSI provisioner replicas
#- --csi-provisioner-replica-count
#- "3"
env:
- name: POD_NAMESPACE
valueFrom:

docs/dr-volume.md (new file, 53 lines)

@ -0,0 +1,53 @@
# Disaster Recovery Volume
## What is Disaster Recovery Volume?
To increase the resiliency of the volume, Longhorn supports disaster recovery volumes.
The disaster recovery volume is designed for the backup cluster, in case the whole main cluster goes down.
A disaster recovery volume is normally in standby mode. Users need to activate it before using it as a normal volume.
A disaster recovery volume can be created from a volume's backup in the backup store. Longhorn will monitor its
original backup volume and incrementally restore from the latest backup. Once the original volume in the main cluster goes
down and users decide to activate the disaster recovery volume in the backup cluster, the disaster recovery volume can be
activated immediately in most conditions, which greatly reduces the time needed to restore the data from the
backup store to the volume in the backup cluster.
## How to create Disaster Recovery Volume?
1. In cluster A, make sure the original volume X has a backup created or recurring backups scheduled.
2. Set the backup target in cluster B to be the same as cluster A's.
3. In the backup page of cluster B, choose the backup volume X, then create disaster recovery volume Y. It's highly recommended
to use the backup volume name as the disaster recovery volume name.
4. Attach the disaster recovery volume Y to any node. Longhorn will then automatically poll for the last backup of
volume X and incrementally restore it to volume Y.
5. If volume X is down, users can activate volume Y immediately. Once activated, volume Y will become a
normal Longhorn volume.
5.1. Note that deactivating a normal volume is not allowed.
## About Activating Disaster Recovery Volume
1. A disaster recovery volume doesn't support creating/deleting/reverting snapshots, creating backups, or creating
PV/PVCs. Users cannot update `Backup Target` in Settings if any disaster recovery volumes exist.
2. When users try to activate a disaster recovery volume, Longhorn will check the last backup of the original volume. If
it hasn't been restored, the restoration will be started, and the activate action will fail. Users need to wait for
the restoration to complete before retrying.
3. For a disaster recovery volume, `Last Backup` indicates the most recent backup of its original backup volume. If the icon
representing the disaster recovery volume is gray, the volume is still restoring `Last Backup` and cannot be activated
right now; if the icon is blue, the volume has restored the `Last Backup`.
## RPO and RTO
Typically, incremental restoration is triggered by the periodic backup store update. Users can set the backup store update
interval in `Setting - General - Backupstore Poll Interval`. Notice that this interval can potentially impact the
Recovery Time Objective (RTO). If it is too long, there may be a large amount of data for the disaster recovery volume to
restore, which will take a long time. As for the Recovery Point Objective (RPO), it is determined by the recurring backup
schedule of the backup volume. You can check [here](snapshot-backup.md) to see how to set recurring backups in Longhorn.
For example, if the recurring backup schedule for normal volume A creates a backup every hour, then the RPO is 1 hour.
Assuming the volume creates a backup every hour, and incrementally restoring the data of one backup takes 5 minutes:
If `Backupstore Poll Interval` is 30 minutes, then there will be at most one backup's worth of data since the last restoration.
The time for restoring one backup is 5 minutes, so the RTO is 5 minutes.
If `Backupstore Poll Interval` is 12 hours, then there will be at most 12 backups' worth of data since the last restoration.
The time for restoring the backups is 5 * 12 = 60 minutes, so the RTO is 60 minutes.
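The same worst-case estimate can be written out as a small calculation (a sketch matching the example above; all values in minutes):
```
backup_interval=60       # recurring backup every hour
restore_per_backup=5     # minutes to incrementally restore one backup
poll_interval=720        # Backupstore Poll Interval of 12 hours
backups_behind=$(( poll_interval / backup_interval ))
echo "worst-case RTO is about $(( backups_behind * restore_per_backup )) minutes"   # prints 60
```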

docs/k8s-workload.md (new file, 39 lines)

@ -0,0 +1,39 @@
# Workload identification for volume
Now users can identify current workloads or workload history for existing Longhorn volumes.
```
PV Name: test1-pv
PV Status: Bound
Namespace: default
PVC Name: test1-pvc
Last Pod Name: volume-test-1
Last Pod Status: Running
Last Workload Name: volume-test
Last Workload Type: Statefulset
Last time used by Pod: a few seconds ago
```
## About historical status
There are a few fields that can contain historical status instead of the current status.
Those fields can be used to help users figure out which workload has used the volume in the past:
1. `Last time bound with PVC`: If this field is set, it indicates that there is currently no PVC bound to this volume.
The related fields will show the most recently bound PVC.
2. `Last time used by Pod`: If these fields are set, they indicate that there is currently no workload using this volume.
The related fields will show the most recent workload that used this volume.
# PV/PVC creation for existing Longhorn volume
Now users can create a PV/PVC via the Longhorn UI for existing Longhorn volumes.
Only a detached volume can be used by a newly created pod.
## About special fields of PV/PVC
Since the Longhorn volume already exists when the PV/PVC is created, a StorageClass is not needed for dynamically provisioning
the Longhorn volume. However, the field `storageClassName` will still be set in the PVC/PV, to be used for PVC binding purposes, and
it's unnecessary for users to create the related StorageClass object.
By default the StorageClass for Longhorn-created PV/PVCs is `longhorn-static`. Users can modify it in
`Setting - General - Default Longhorn Static StorageClass Name` as needed.
Users need to manually delete any PVCs and PVs created by Longhorn.
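To see the objects the UI created, a quick check could be (a sketch; `longhorn-static` is the default StorageClass name mentioned above):
```
kubectl get pv,pvc --all-namespaces | grep longhorn-static
```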


@ -23,10 +23,7 @@ A backup target represents a backupstore in Longhorn. The backup target can be s
See [here](#set-backuptarget) for details on how to set up the backup target.
Longhorn also supports setting up recurring snapshot/backup jobs for volumes, via Longhorn UI or Kubernetes Storage Class. See [here](#setup-recurring-snapshotbackup) for details.
## Set BackupTarget
@ -129,3 +126,45 @@ nfs://longhorn-test-nfs-svc.default:/opt/backupstore
```
You can find an example NFS backupstore for testing purposes [here](https://github.com/rancher/longhorn/blob/master/deploy/backupstores/nfs-backupstore.yaml).
# Setup recurring snapshot/backup
Longhorn supports recurring snapshots and backups for volumes. Users only need to set when they wish to take the snapshot and/or backup, and how many snapshots/backups need to be retained; Longhorn will then automatically create the snapshot/backup at that time, as long as the volume is attached to a node.
Users can set up recurring snapshots/backups via the Longhorn UI, or via a Kubernetes StorageClass.
## Set up recurring jobs using Longhorn UI
Users can find the settings for recurring snapshots and backups in the `Volume Detail` page.
## Set up recurring jobs using StorageClass
Users can set the field `recurringJobs` in the StorageClass parameters. Any future volumes created using this StorageClass will have those recurring jobs automatically set up.
The field `recurringJobs` should follow JSON format, e.g.:
```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: longhorn
provisioner: rancher.io/longhorn
parameters:
numberOfReplicas: "3"
staleReplicaTimeout: "30"
fromBackup: ""
recurringJobs: '[{"name":"snap", "task":"snapshot", "cron":"*/1 * * * *", "retain":1},
{"name":"backup", "task":"backup", "cron":"*/2 * * * *", "retain":1}]'
```
Explanation:
1. `name`: Name of the job. Do not use duplicate names within one `recurringJobs` setting, and the length of `name` should be no more than 8 characters.
2. `task`: Type of the job. It only supports `snapshot` (periodically create a snapshot) or `backup` (periodically create a snapshot, then do a backup).
3. `cron`: Cron expression. It specifies the execution time of the job.
4. `retain`: How many snapshots/backups Longhorn will retain for the job. It should be no less than 1.
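As a usage sketch (the PVC name and size below are made up), any PVC bound to this StorageClass will get the recurring jobs configured above:
```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: recurring-demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
EOF
```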


@ -7,3 +7,5 @@ parameters:
numberOfReplicas: "3"
staleReplicaTimeout: "30"
fromBackup: ""
# recurringJobs: '[{"name":"snap", "task":"snapshot", "cron":"*/1 * * * *", "retain":1},
# {"name":"backup", "task":"backup", "cron":"*/2 * * * *", "retain":1}]'


@ -1,8 +1,49 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: longhorn-uninstall-service-account
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: longhorn-uninstall-role
rules:
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- "*"
- apiGroups: [""]
resources: ["pods", "persistentvolumes", "persistentvolumeclaims"]
verbs: ["*"]
- apiGroups: ["apps"]
resources: ["daemonsets", "statefulsets", "deployments"]
verbs: ["*"]
- apiGroups: ["batch"]
resources: ["jobs", "cronjobs"]
verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
resources: ["volumes", "engines", "replicas", "settings", "engineimages", "nodes"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: longhorn-uninstall-bind
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: longhorn-uninstall-role
subjects:
- kind: ServiceAccount
name: longhorn-uninstall-service-account
namespace: default
---
apiVersion: batch/v1
kind: Job
metadata:
name: longhorn-uninstall
spec:
activeDeadlineSeconds: 900
backoffLimit: 1
@ -12,16 +53,14 @@ spec:
spec:
containers:
- name: longhorn-uninstall
image: rancher/longhorn-manager:v0.5.0
imagePullPolicy: Always
command:
- longhorn-manager
- uninstall
- --force
env:
- name: LONGHORN_NAMESPACE
value: longhorn-system
restartPolicy: OnFailure
serviceAccountName: longhorn-uninstall-service-account