Merge pull request #886 from longhorn/v0.7.0

V0.7.0
Sheng Yang 2019-11-14 22:58:22 -08:00, committed by GitHub
commit ebeaff7d67
26 changed files with 594 additions and 277 deletions


@@ -23,9 +23,9 @@ You can read more technical details of Longhorn [here](http://rancher.com/micros
## Current status
Longhorn is alpha-quality software. We appreciate your willingness to deploy Longhorn and provide feedback.
Longhorn is beta-quality software. We appreciate your willingness to deploy Longhorn and provide feedback.
The latest release of Longhorn is **v0.6.2**.
The latest release of Longhorn is **v0.7.0**.
## Source code
Longhorn is 100% open source software. Project source code is spread across a number of repos:
@@ -34,7 +34,7 @@ Longhorn is 100% open source software. Project source code is spread across a nu
1. Longhorn manager -- Longhorn orchestration, includes Flexvolume driver for Kubernetes https://github.com/longhorn/longhorn-manager
1. Longhorn UI -- Dashboard https://github.com/longhorn/longhorn-ui
![Longhorn UI](https://s3-us-west-1.amazonaws.com/rancher-longhorn/Longhorn_UI.png)
![Longhorn UI](./longhorn-ui.png)
# Requirements
@@ -222,6 +222,7 @@ More examples are available at `./examples/`
### [Use CSI driver on RancherOS/CoreOS + RKE or K3S](./docs/csi-config.md)
### [Restore a backup to an image file](./docs/restore-to-file.md)
### [Disaster Recovery Volume](./docs/dr-volume.md)
### [Recover volume after unexpected detachment](./docs/recover-volume.md)
# Troubleshooting
You can click the `Generate Support Bundle` link at the bottom of the UI to download a zip file containing Longhorn-related configurations and logs.


@@ -1,8 +1,8 @@
apiVersion: v1
name: longhorn
version: 0.6.2
appVersion: v0.6.2
kubeVersion: ">=v1.12.0-r0"
version: 0.7.0
appVersion: v0.7.0
kubeVersion: ">=v1.14.0-r0"
description: Longhorn is a distributed block storage system for Kubernetes powered by Rancher Labs.
keywords:
- longhorn
@@ -21,4 +21,4 @@ sources:
maintainers:
- name: rancher
email: charts@rancher.com
icon: https://s3.us-east-2.amazonaws.com/longhorn-assets/longhorn-logo.svg
icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/longhorn/horizontal/color/longhorn-horizontal-color.svg?sanitize=true


@@ -8,15 +8,15 @@ The following document pertains to running Longhorn from the Rancher 2.0 chart.
Longhorn is 100% open source software. Project source code is spread across a number of repos:
1. Longhorn Engine -- Core controller/replica logic https://github.com/rancher/longhorn-engine
2. Longhorn Manager -- Longhorn orchestration, includes Flexvolume driver for Kubernetes https://github.com/rancher/longhorn-manager
3. Longhorn UI -- Dashboard https://github.com/rancher/longhorn-ui
1. Longhorn Engine -- Core controller/replica logic https://github.com/longhorn/longhorn-engine
2. Longhorn Manager -- Longhorn orchestration, includes CSI driver for Kubernetes https://github.com/longhorn/longhorn-manager
3. Longhorn UI -- Dashboard https://github.com/longhorn/longhorn-ui
## Prerequisites
1. Rancher v2.1+
2. Docker v1.13+
3. Kubernetes v1.8+ cluster with 1 or more nodes and Mount Propagation feature enabled. If your Kubernetes cluster was provisioned by Rancher v2.0.7+ or later, MountPropagation feature is enabled by default. [Check your Kubernetes environment now](https://github.com/rancher/longhorn#environment-check-script). If MountPropagation is disabled, the Kubernetes Flexvolume driver will be deployed instead of the default CSI driver. Base Image feature will also be disabled if MountPropagation is disabled.
3. Kubernetes v1.14+
4. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
5. Make sure `open-iscsi` has been installed on all nodes of the Kubernetes cluster. For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`.
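A minimal sketch for verifying these node prerequisites (the tool list comes from the requirements above; `iscsiadm` is provided by `open-iscsi`):
```
# Report any required utility that is missing on this node.
for cmd in curl findmnt grep awk blkid iscsiadm; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```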
@@ -37,23 +37,10 @@ Redeploy the (same version) Longhorn App. Follow the uninstallation procedure ab
If your CRD instances or the CRDs themselves can't be deleted for whatever reason, run the commands below to clean up. Caution: this will wipe all Longhorn state!
```
# Delete CRD finalizers, instances and definitions
for crd in $(kubectl get crd -o jsonpath={.items[*].metadata.name} | tr ' ' '\n' | grep longhorn.rancher.io); do
kubectl -n ${NAMESPACE} get $crd -o yaml | sed "s/\- longhorn.rancher.io//g" | kubectl apply -f -
kubectl -n ${NAMESPACE} delete $crd --all
kubectl delete crd/$crd
done
# Delete CRD instances and definitions
curl -s https://raw.githubusercontent.com/longhorn/longhorn-manager/master/hack/cleancrds.sh |bash -s v062
curl -s https://raw.githubusercontent.com/longhorn/longhorn-manager/master/hack/cleancrds.sh |bash -s v070
```
### Volume can be attached/detached from UI, but Kubernetes Pod/StatefulSet etc. cannot use it
Check if the volume plugin directory has been set correctly. It is automatically detected unless the user explicitly sets it.
By default, Kubernetes uses `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites).
Some vendors choose to change the directory for various reasons. For example, GKE uses `/home/kubernetes/flexvolume` instead.
Users can find the correct directory by running `ps aux | grep kubelet` on the host and checking the `--volume-plugin-dir` parameter. If it is not set, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used.
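A sketch of that check, assuming kubelet runs directly on the host and the flag is passed in `--flag=value` form:
```
# Print the kubelet volume plugin directory, falling back to the Kubernetes default.
dir=$(ps -o args= -C kubelet | tr ' ' '\n' | sed -n 's/^--volume-plugin-dir=//p')
echo "${dir:-/usr/libexec/kubernetes/kubelet-plugins/volume/exec/}"
```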
---
Please see [link](https://github.com/rancher/longhorn) for more information.
Please see [link](https://github.com/longhorn/longhorn) for more information.


@@ -4,73 +4,62 @@ labels:
io.rancher.certified: experimental
namespace: longhorn-system
questions:
- variable: driver
default: csi
description: "Deploy either the CSI or FlexVolume driver. CSI is newer but requires MountPropagation, a feature enabled by default in Kubernetes v1.10 and later"
type: enum
options:
- csi
- flexvolume
label: Longhorn Kubernetes Driver
group: "Longhorn Driver Settings"
show_subquestion_if: flexvolume
subquestions:
- variable: persistence.flexvolumePath
default: ""
description: "Leave blank to autodetect. For RKE, use `/var/lib/kubelet/volumeplugins`. For GKE, use `/home/kubernetes/flexvolume/` instead. Users can find the correct directory by running `ps aux|grep kubelet` on the host and check the --volume-plugin-dir parameter. If there is none, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used."
type: string
label: Longhorn Flexvolume Path
show_subquestion_if: csi
subquestions:
- variable: csi.attacherImage
default:
description: "Specify CSI attacher image. Leave blank to autodetect."
type: string
label: Longhorn CSI Attacher Image
- variable: csi.provisionerImage
default:
description: "Specify CSI provisioner image. Leave blank to autodetect."
type: string
label: Longhorn CSI Provisioner Image
- variable: csi.driverRegistrarImage
default:
description: "Specify CSI Driver Registrar image. Leave blank to autodetect."
type: string
label: Longhorn CSI Driver Registrar Image
- variable: csi.kubeletRootDir
default:
description: "Specify kubelet root-dir. Leave blank to autodetect."
type: string
label: Kubelet Root Directory
- variable: csi.attacherReplicaCount
type: int
default:
min: 1
max: 10
description: "Specify number of CSI Attacher replica. By default 3."
label: Longhorn CSI Attacher replica count
- variable: csi.provisionerReplicaCount
type: int
default:
min: 1
max: 10
description: "Specify number of CSI Provisioner replica. By default 3."
label: Longhorn CSI Provisioner replica count
- variable: csi.attacherImage
default:
description: "Specify CSI attacher image. Leave blank to autodetect."
type: string
label: Longhorn CSI Attacher Image
group: "Longhorn CSI Driver Settings"
- variable: csi.provisionerImage
default:
description: "Specify CSI provisioner image. Leave blank to autodetect."
type: string
label: Longhorn CSI Provisioner Image
group: "Longhorn CSI Driver Settings"
- variable: csi.driverRegistrarImage
default:
description: "Specify CSI Driver Registrar image. Leave blank to autodetect."
type: string
label: Longhorn CSI Driver Registrar Image
group: "Longhorn CSI Driver Settings"
- variable: csi.kubeletRootDir
default:
description: "Specify kubelet root-dir. Leave blank to autodetect."
type: string
label: Kubelet Root Directory
group: "Longhorn CSI Driver Settings"
- variable: csi.attacherReplicaCount
type: int
default:
min: 1
max: 10
description: "Specify replica count of CSI Attacher. By default 3."
label: Longhorn CSI Attacher replica count
group: "Longhorn CSI Driver Settings"
- variable: csi.provisionerReplicaCount
type: int
default:
min: 1
max: 10
description: "Specify replica count of CSI Provisioner. By default 3."
label: Longhorn CSI Provisioner replica count
group: "Longhorn CSI Driver Settings"
- variable: persistence.defaultClass
default: "true"
description: "Set as default StorageClass"
group: "Longhorn Driver Settings"
group: "Longhorn CSI Driver Settings"
type: boolean
required: true
label: Default Storage Class
- variable: persistence.defaultClassReplicaCount
description: "Set replica count for default StorageClass"
group: "Longhorn Driver Settings"
group: "Longhorn CSI Driver Settings"
type: int
default: 3
min: 1
max: 10
label: Default Storage Class Replica Count
- variable: defaultSettings.backupTarget
label: Backup Target
description: "The target used for backup. Support NFS or S3."


@@ -22,11 +22,19 @@ rules:
resources: ["jobs", "cronjobs"]
verbs: ["*"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses", "volumeattachments"]
resources: ["storageclasses", "volumeattachments", "csinodes", "csidrivers"]
verbs: ["*"]
- apiGroups: ["csi.storage.k8s.io"]
resources: ["csinodeinfos"]
verbs: ["get", "list", "watch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["longhorn.io"]
resources: ["volumes", "volumes/status", "engines", "engines/status", "replicas", "replicas/status", "settings",
"engineimages", "engineimages/status", "nodes", "nodes/status", "instancemanagers", "instancemanagers/status"]
verbs: ["*"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["*"]
# to be removed after v0.7.0
- apiGroups: ["longhorn.rancher.io"]
resources: ["volumes", "engines", "replicas", "settings", "engineimages", "nodes", "instancemanagers"]
verbs: ["*"]


@@ -1,5 +1,143 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Engine
name: engines.longhorn.io
spec:
group: longhorn.io
names:
kind: Engine
listKind: EngineList
plural: engines
shortNames:
- lhe
singular: engine
scope: Namespaced
version: v1beta1
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Replica
name: replicas.longhorn.io
spec:
group: longhorn.io
names:
kind: Replica
listKind: ReplicaList
plural: replicas
shortNames:
- lhr
singular: replica
scope: Namespaced
version: v1beta1
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Setting
name: settings.longhorn.io
spec:
group: longhorn.io
names:
kind: Setting
listKind: SettingList
plural: settings
shortNames:
- lhs
singular: setting
scope: Namespaced
version: v1beta1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Volume
name: volumes.longhorn.io
spec:
group: longhorn.io
names:
kind: Volume
listKind: VolumeList
plural: volumes
shortNames:
- lhv
singular: volume
scope: Namespaced
version: v1beta1
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: EngineImage
name: engineimages.longhorn.io
spec:
group: longhorn.io
names:
kind: EngineImage
listKind: EngineImageList
plural: engineimages
shortNames:
- lhei
singular: engineimage
scope: Namespaced
version: v1beta1
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Node
name: nodes.longhorn.io
spec:
group: longhorn.io
names:
kind: Node
listKind: NodeList
plural: nodes
shortNames:
- lhn
singular: node
scope: Namespaced
version: v1beta1
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: InstanceManager
name: instancemanagers.longhorn.io
spec:
group: longhorn.io
names:
kind: InstanceManager
listKind: InstanceManagerList
plural: instancemanagers
shortNames:
- lhim
singular: instancemanager
scope: Namespaced
version: v1beta1
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Engine
@@ -122,4 +260,4 @@ spec:
- lhim
singular: instancemanager
scope: Namespaced
version: v1alpha1
version: v1alpha1


@@ -29,8 +29,6 @@ spec:
- "{{ .Values.image.longhorn.manager }}:{{ .Values.image.longhorn.managerTag }}"
- --manager-url
- http://longhorn-backend:9500/v1
- --driver
- "{{ .Values.driver }}"
env:
- name: POD_NAMESPACE
valueFrom:
@@ -44,8 +42,6 @@ spec:
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: FLEXVOLUME_DIR
value: {{ .Values.persistence.flexvolumePath }}
{{- if .Values.csi.kubeletRootDir }}
- name: KUBELET_ROOT_DIR
value: {{ .Values.csi.kubeletRootDir }}


@@ -20,8 +20,6 @@ spec:
command:
- longhorn-manager
- post-upgrade
- --from-version
- 0.0.1
env:
- name: POD_NAMESPACE
valueFrom:


@@ -9,7 +9,7 @@ metadata:
annotations:
storageclass.beta.kubernetes.io/is-default-class: "false"
{{- end }}
provisioner: rancher.io/longhorn
provisioner: driver.longhorn.io
parameters:
numberOfReplicas: "{{ .Values.persistence.defaultClassReplicaCount }}"
staleReplicaTimeout: "30"


@@ -4,11 +4,11 @@
image:
longhorn:
engine: longhornio/longhorn-engine
engineTag: v0.6.2
engineTag: v0.7.0
manager: longhornio/longhorn-manager
managerTag: v0.6.2
managerTag: v0.7.0
ui: longhornio/longhorn-ui
uiTag: v0.6.2
uiTag: v0.7.0
pullPolicy: IfNotPresent
service:
@@ -19,13 +19,7 @@ service:
type: ClusterIP
nodePort: ""
# deploy either 'flexvolume' or 'csi' driver
driver: csi
persistence:
# for GKE uses /home/kubernetes/flexvolume/ instead, User can find the correct directory by running ps aux|grep kubelet on the host and check the --volume-plugin-dir parameter.
# If there is none, the default /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ will be used.
flexvolumePath:
defaultClass: true
defaultClassReplicaCount: 3


@@ -33,11 +33,19 @@ rules:
resources: ["jobs", "cronjobs"]
verbs: ["*"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses", "volumeattachments"]
resources: ["storageclasses", "volumeattachments", "csinodes", "csidrivers"]
verbs: ["*"]
- apiGroups: ["csi.storage.k8s.io"]
resources: ["csinodeinfos"]
verbs: ["get", "list", "watch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["longhorn.io"]
resources: ["volumes", "volumes/status", "engines", "engines/status", "replicas", "replicas/status", "settings",
"engineimages", "engineimages/status", "nodes", "nodes/status", "instancemanagers", "instancemanagers/status"]
verbs: ["*"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["*"]
# to be removed after v0.7.0
- apiGroups: ["longhorn.rancher.io"]
resources: ["volumes", "engines", "replicas", "settings", "engineimages", "nodes", "instancemanagers"]
verbs: ["*"]
@@ -60,9 +68,9 @@ kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Engine
name: engines.longhorn.rancher.io
name: engines.longhorn.io
spec:
group: longhorn.rancher.io
group: longhorn.io
names:
kind: Engine
listKind: EngineList
@@ -71,16 +79,18 @@ spec:
- lhe
singular: engine
scope: Namespaced
version: v1alpha1
version: v1beta1
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Replica
name: replicas.longhorn.rancher.io
name: replicas.longhorn.io
spec:
group: longhorn.rancher.io
group: longhorn.io
names:
kind: Replica
listKind: ReplicaList
@@ -89,16 +99,18 @@ spec:
- lhr
singular: replica
scope: Namespaced
version: v1alpha1
version: v1beta1
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Setting
name: settings.longhorn.rancher.io
name: settings.longhorn.io
spec:
group: longhorn.rancher.io
group: longhorn.io
names:
kind: Setting
listKind: SettingList
@@ -107,16 +119,16 @@ spec:
- lhs
singular: setting
scope: Namespaced
version: v1alpha1
version: v1beta1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Volume
name: volumes.longhorn.rancher.io
name: volumes.longhorn.io
spec:
group: longhorn.rancher.io
group: longhorn.io
names:
kind: Volume
listKind: VolumeList
@@ -125,16 +137,18 @@ spec:
- lhv
singular: volume
scope: Namespaced
version: v1alpha1
version: v1beta1
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: EngineImage
name: engineimages.longhorn.rancher.io
name: engineimages.longhorn.io
spec:
group: longhorn.rancher.io
group: longhorn.io
names:
kind: EngineImage
listKind: EngineImageList
@@ -143,16 +157,18 @@ spec:
- lhei
singular: engineimage
scope: Namespaced
version: v1alpha1
version: v1beta1
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Node
name: nodes.longhorn.rancher.io
name: nodes.longhorn.io
spec:
group: longhorn.rancher.io
group: longhorn.io
names:
kind: Node
listKind: NodeList
@@ -161,16 +177,18 @@ spec:
- lhn
singular: node
scope: Namespaced
version: v1alpha1
version: v1beta1
subresources:
status: {}
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: InstanceManager
name: instancemanagers.longhorn.rancher.io
name: instancemanagers.longhorn.io
spec:
group: longhorn.rancher.io
group: longhorn.io
names:
kind: InstanceManager
listKind: InstanceManagerList
@@ -179,7 +197,9 @@ spec:
- lhim
singular: instancemanager
scope: Namespaced
version: v1alpha1
version: v1beta1
subresources:
status: {}
---
apiVersion: v1
kind: ConfigMap
@@ -220,7 +240,7 @@ spec:
spec:
containers:
- name: longhorn-manager
image: longhornio/longhorn-manager:v0.6.2
image: longhornio/longhorn-manager:v0.7.0
imagePullPolicy: Always
securityContext:
privileged: true
@@ -229,9 +249,9 @@ spec:
- -d
- daemon
- --engine-image
- longhornio/longhorn-engine:v0.6.2
- longhornio/longhorn-engine:v0.7.0
- --manager-image
- longhornio/longhorn-manager:v0.6.2
- longhornio/longhorn-manager:v0.7.0
- --service-account
- longhorn-service-account
ports:
@@ -247,7 +267,7 @@ spec:
mountPath: /var/lib/rancher/longhorn/
mountPropagation: Bidirectional
- name: longhorn-default-setting
mountPath: /var/lib/longhorn/setting/
mountPath: /var/lib/longhorn-setting/
env:
- name: POD_NAMESPACE
valueFrom:
@@ -263,7 +283,7 @@ spec:
fieldPath: spec.nodeName
# Should be: mount path of the volume longhorn-default-setting + the key of the configmap data in 04-default-setting.yaml
- name: DEFAULT_SETTING_PATH
value: /var/lib/longhorn/setting/default-setting.yaml
value: /var/lib/longhorn-setting/default-setting.yaml
volumes:
- name: dev
hostPath:
@@ -316,7 +336,7 @@ spec:
spec:
containers:
- name: longhorn-ui
image: longhornio/longhorn-ui:v0.6.2
image: longhornio/longhorn-ui:v0.7.0
ports:
- containerPort: 8000
env:
@@ -356,26 +376,20 @@ spec:
spec:
initContainers:
- name: wait-longhorn-manager
image: longhornio/longhorn-manager:v0.6.2
image: longhornio/longhorn-manager:v0.7.0
command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done']
containers:
- name: longhorn-driver-deployer
image: longhornio/longhorn-manager:v0.6.2
image: longhornio/longhorn-manager:v0.7.0
imagePullPolicy: Always
command:
- longhorn-manager
- -d
- deploy-driver
- --manager-image
- longhornio/longhorn-manager:v0.6.2
- longhornio/longhorn-manager:v0.7.0
- --manager-url
- http://longhorn-backend:9500/v1
# manually choose "flexvolume" or "csi"
#- --driver
#- flexvolume
# manually set root directory for flexvolume
#- --flexvolume-dir
#- /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
# manually set root directory for csi
#- --kubelet-root-dir
#- /var/lib/rancher/k3s/agent/kubelet
@@ -398,11 +412,20 @@ spec:
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
# For auto detection, leave this parameter unset
#- name: FLEXVOLUME_DIR
# FOR RKE
#value: "/var/lib/kubelet/volumeplugins"
# FOR GKE
#value: "/home/kubernetes/flexvolume/"
serviceAccountName: longhorn-service-account
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: longhorn
provisioner: driver.longhorn.io
parameters:
numberOfReplicas: "3"
staleReplicaTimeout: "2880"
fromBackup: ""
# diskSelector: "ssd,fast"
# nodeSelector: "storage,fast"
# recurringJobs: '[{"name":"snap", "task":"snapshot", "cron":"*/1 * * * *", "retain":1},
# {"name":"backup", "task":"backup", "cron":"*/2 * * * *", "retain":1,
# "labels": {"interval":"2m"}}]'
---


@@ -53,7 +53,16 @@
```
#### K3S:
No extra configuration is needed as long as you have `open-iscsi` or `iscsiadm` installed on the node.
##### 1. For Longhorn v0.7.0+ (v0.7.0 included)
By default, Longhorn v0.7.0+ supports only K3s v0.10.0+ (v0.10.0 included).
If you want to deploy these new Longhorn versions on K3s older than v0.10.0, you need to set `--kubelet-root-dir` to `<data-dir>/agent/kubelet` for the Deployment `longhorn-driver-deployer` in `longhorn/deploy/longhorn.yaml`; see the patch sketch below.
`data-dir` is a `k3s` argument that can be set when you launch a K3s server. By default it is `/var/lib/rancher/k3s`.
##### 2. For Longhorn older than v0.7.0 (v0.7.0 not included)
By default, Longhorn older than v0.7.0 supports only K3s older than v0.10.0 (v0.10.0 not included).
If you want to deploy these Longhorn versions on K3s v0.10.0+, you need to set `--kubelet-root-dir` to `/var/lib/kubelet` for the Deployment `longhorn-driver-deployer` in `longhorn/deploy/longhorn.yaml`.
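One way to apply either setting without editing the YAML by hand is a JSON patch against the deployed Deployment. This is a sketch, assuming the default manifest layout (a single container whose `command` array holds the deployer arguments); substitute the `--kubelet-root-dir` value for your case:
```
# Append --kubelet-root-dir to the longhorn-driver-deployer command.
# The value shown is for Longhorn v0.7.0+ on K3s below v0.10.0 with the default data-dir.
kubectl -n longhorn-system patch deployment longhorn-driver-deployer --type=json -p='[
  {"op":"add","path":"/spec/template/spec/containers/0/command/-","value":"--kubelet-root-dir"},
  {"op":"add","path":"/spec/template/spec/containers/0/command/-","value":"/var/lib/rancher/k3s/agent/kubelet"}]'
```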
## Troubleshooting
### Common issues
@@ -79,22 +88,27 @@ You will find `root-dir` in the cmdline of proc `kubelet`. If it's not set, the
If kubelet is using a configuration file, you would need to check the configuration file to locate the `root-dir` parameter.
**For K3S**
**For K3S v0.10.0-**
Run `ps aux | grep k3s` and get argument `--data-dir` or `-d` on k3s node.
Run `ps aux | grep k3s` and get argument `--data-dir` or `-d` on k3s server node.
e.g.
```
$ ps uax | grep k3s
root 4160 0.0 0.0 51420 3948 pts/0 S+ 00:55 0:00 sudo /usr/local/bin/k3s server --data-dir /opt/test/kubelet
root 4161 49.0 4.0 259204 164292 pts/0 Sl+ 00:55 0:04 /usr/local/bin/k3s server --data-dir /opt/test/kubelet
root 4160 0.0 0.0 51420 3948 pts/0 S+ 00:55 0:00 sudo /usr/local/bin/k3s server --data-dir /opt/test/k3s/data/dir
root 4161 49.0 4.0 259204 164292 pts/0 Sl+ 00:55 0:04 /usr/local/bin/k3s server --data-dir /opt/test/k3s/data/dir
```
You will find `data-dir` in the cmdline of the proc `k3s`. By default it is not set, and `/var/lib/rancher/k3s` will be used. Joining `data-dir` with `/agent/kubelet` gives you the `root-dir`. So the default `root-dir` for K3s is `/var/lib/rancher/k3s/agent/kubelet`.
If K3S is using a configuration file, you would need to check the configuration file to locate the `data-dir` parameter.
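The same derivation as a sketch, assuming the `k3s server` process is running and the flag, when present, is passed as `--data-dir <value>`:
```
# Compute the kubelet root-dir for K3s below v0.10.0 from the running k3s server.
data_dir=$(ps -o args= -C k3s | tr ' ' '\n' | grep -x -A1 -- '--data-dir' | tail -n1)
echo "${data_dir:-/var/lib/rancher/k3s}/agent/kubelet"
```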
**For K3S v0.10.0+**
It is always `/var/lib/kubelet`.
## Background
CSI doesn't work with RancherOS/CoreOS + RKE before Longhorn v0.4.1. The reason is:
#### CSI doesn't work with RancherOS/CoreOS + RKE before Longhorn v0.4.1
The reason is:
1. RKE sets argument `root-dir=/opt/rke/var/lib/kubelet` for kubelet in the case of RancherOS or CoreOS, which is different from the default value `/var/lib/kubelet`.
@@ -115,6 +129,9 @@ CSI doesn't work with RancherOS/CoreOS + RKE before Longhorn v0.4.1. The reason
But this path inside the CSI plugin container won't be bind-mounted to the host path, so the mount operation for the Longhorn volume is meaningless.
Hence Kubernetes cannot connect to Longhorn using the CSI driver.
#### Longhorn v0.7.0- doesn't work on K3S v0.10.0+
K3S now sets its kubelet directory to `/var/lib/kubelet`. See [the K3S release comment](https://github.com/rancher/k3s/releases/tag/v0.10.0) for details.
## Reference
https://github.com/kubernetes-csi/driver-registrar

docs/recover-volume.md (new file, 79 lines)

@@ -0,0 +1,79 @@
# Recover volume after unexpected detachment
## Overview
1. Longhorn can now automatically reattach and then remount volumes if an unexpected detachment happens, e.g., during a [Kubernetes upgrade](https://github.com/longhorn/longhorn/issues/703) or a [Docker reboot](https://github.com/longhorn/longhorn/issues/686).
2. After the reattachment and remount complete, users may need to **manually restart the related workload containers** to restore the volume **if the following recommended setup is not applied**.
## Recommended setup when using Longhorn volumes
In order to recover unexpectedly detached volumes automatically, users can set `restartPolicy` to `Always` and add a `livenessProbe` to the workloads using Longhorn volumes.
Those workloads will then be restarted automatically after the reattachment and remount.
Here is one example for the setup:
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: longhorn-volv-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: longhorn
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
name: volume-test
namespace: default
spec:
restartPolicy: Always
containers:
- name: volume-test
image: nginx:stable-alpine
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- ls
- /data/lost+found
initialDelaySeconds: 5
periodSeconds: 5
volumeMounts:
- name: volv
mountPath: /data
ports:
- containerPort: 80
volumes:
- name: volv
persistentVolumeClaim:
claimName: longhorn-volv-pvc
```
- The directory used in the `livenessProbe` will be `<volumeMount.mountPath>/lost+found`.
- Don't set a short interval for `livenessProbe.periodSeconds`, e.g., 1s: the liveness command consumes CPU.
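To sanity-check the probe command by hand, one can run it through `kubectl exec` (a sketch, assuming the example pod above is running in the `default` namespace):
```
# Run the same command the livenessProbe uses; a non-zero exit marks the pod unhealthy.
kubectl -n default exec volume-test -- ls /data/lost+found
```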
## Manually restart workload containers
This solution applies only if:
1. The Longhorn volume is reattached automatically.
2. The above setup was not included when the related workload was launched.
### Steps
1. Figure out on which node the related workload's containers are running
```
kubectl -n <namespace of your workload> get pods <workload's pod name> -o wide
```
2. Connect to the node, e.g., via `ssh`
3. Figure out the containers belonging to the workload
```
docker ps
```
By checking the columns `COMMAND` and `NAMES` of the output, you can find the corresponding container.
4. Restart the container
```
docker restart <the container ID of the workload>
```
### Reason
Typically the volume mount propagation is not `Bidirectional`, which means the Longhorn remount operation won't be propagated to the workload containers unless the containers are restarted.
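A quick way to confirm the propagation mode of a workload's mounts (a sketch, using the example pod above; empty output means the default `None`, i.e., host-side remounts won't propagate into the container):
```
# Show the mountPropagation of the first container's volume mounts.
kubectl get pod volume-test -o jsonpath='{.spec.containers[0].volumeMounts[*].mountPropagation}'
```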


@@ -0,0 +1,128 @@
# Upgrade from v0.6.2 to v0.7.0
Users need to follow this guide to upgrade from v0.6.2 to v0.7.0.
## Preparation
1. Make backups of all the volumes.
1. Stop the workloads using the volumes.
1. Note that live upgrade is not supported from v0.6.2 to v0.7.0.
## Upgrade
### Use Rancher App
1. Run the following command to avoid [this error](#error-the-storageclass-longhorn-is-invalid-provisioner-forbidden-updates-to-provisioner-are-forbidden):
```
kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/v0.7.0/examples/storageclass.yaml
```
2. Click the `Upgrade` button in the Rancher UI
3. Wait for the app to complete the upgrade.
### Use YAML file
Use `kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v0.7.0/deploy/longhorn.yaml`
Then wait for all the pods to become running and the Longhorn UI to be working.
```
$ kubectl -n longhorn-system get pod
NAME READY STATUS RESTARTS AGE
compatible-csi-attacher-69857469fd-rj5vm 1/1 Running 4 3d12h
csi-attacher-79b9bfc665-56sdb 1/1 Running 0 3d12h
csi-attacher-79b9bfc665-hdj7t 1/1 Running 0 3d12h
csi-attacher-79b9bfc665-tfggq 1/1 Running 3 3d12h
csi-provisioner-68b7d975bb-5ggp8 1/1 Running 0 3d12h
csi-provisioner-68b7d975bb-frggd 1/1 Running 2 3d12h
csi-provisioner-68b7d975bb-zrr65 1/1 Running 0 3d12h
engine-image-ei-605a0f3e-8gx4s 1/1 Running 0 3d14h
engine-image-ei-605a0f3e-97gxx 1/1 Running 0 3d14h
engine-image-ei-605a0f3e-r6wm4 1/1 Running 0 3d14h
instance-manager-e-a90b0bab 1/1 Running 0 3d14h
instance-manager-e-d1458894 1/1 Running 0 3d14h
instance-manager-e-f2caa5e5 1/1 Running 0 3d14h
instance-manager-r-04417b70 1/1 Running 0 3d14h
instance-manager-r-36d9928a 1/1 Running 0 3d14h
instance-manager-r-f25172b1 1/1 Running 0 3d14h
longhorn-csi-plugin-72bsp 4/4 Running 0 3d12h
longhorn-csi-plugin-hlbg8 4/4 Running 0 3d12h
longhorn-csi-plugin-zrvhl 4/4 Running 0 3d12h
longhorn-driver-deployer-66b6d8b97c-snjrn 1/1 Running 0 3d12h
longhorn-manager-pf5p5 1/1 Running 0 3d14h
longhorn-manager-r5npp 1/1 Running 1 3d14h
longhorn-manager-t59kt 1/1 Running 0 3d14h
longhorn-ui-b466b6d74-w7wzf 1/1 Running 0 50m
```
## Troubleshooting
### Error: `"longhorn" is invalid: provisioner: Forbidden: updates to provisioner are forbidden.`
- This means you need to clean up the old `longhorn` StorageClass for the Longhorn v0.7.0 upgrade, since the provisioner has been changed from `rancher.io/longhorn` to `driver.longhorn.io`.
- Note that the PVs created by the old StorageClass will still use `rancher.io/longhorn` as the provisioner. Longhorn v0.7.0 supports attaching/detaching/deleting the PVs created by the previous version of Longhorn, but it doesn't support creating new PVs using the old provisioner name. Please use the new StorageClass for new volumes.
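To check which driver each existing PV uses before and after the upgrade, one option is the sketch below (CSI-provisioned Longhorn volumes report `io.rancher.longhorn` for the old driver and `driver.longhorn.io` for the new one; Flexvolume PVs will show `<none>`):
```
# List PVs together with the CSI driver recorded in their spec.
kubectl get pv -o custom-columns=NAME:.metadata.name,DRIVER:.spec.csi.driver
```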
#### If you are using YAML file:
1. Clean up the deprecated StorageClass:
```
kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/v0.7.0/examples/storageclass.yaml
```
2. Run
```
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v0.7.0/deploy/longhorn.yaml
```
#### If you are using Rancher App:
1. Clean up the default StorageClass:
```
kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/v0.7.0/examples/storageclass.yaml
```
2. Follow [this error instruction](#error-kind-customresourcedefinition-with-the-name-xxx-already-exists-in-the-cluster-and-wasnt-defined-in-the-previous-release)
### Error: `kind CustomResourceDefinition with the name "xxx" already exists in the cluster and wasn't defined in the previous release...`
- This is [a Helm bug](https://github.com/helm/helm/issues/6031).
- Please make sure that you have not deleted the old Longhorn CRDs via the command `curl -s https://raw.githubusercontent.com/longhorn/longhorn-manager/master/hack/cleancrds.sh | bash -s v062` or executed the Longhorn uninstaller before running the following commands. Otherwise you MAY LOSE all the data stored in the Longhorn system.
1. Clean up the leftover:
```
kubectl -n longhorn-system delete ds longhorn-manager
curl -s https://raw.githubusercontent.com/longhorn/longhorn-manager/master/hack/cleancrds.sh | bash -s v070
```
2. Re-click the `Upgrade` button in the Rancher UI.
## Rollback
Since this release upgrades the CSI framework from v0.4.2 to v1.1.0, rolling back from Longhorn v0.7.0 to v0.6.2 or lower means downgrading the CSI plugin.
But Kubernetes does not support a CSI downgrade. **Hence restarting the kubelet is unavoidable. Please be careful: check the conditions beforehand and follow the instructions exactly.**
Prerequisite:
* To roll back from a v0.7.0 installation, you must not have executed [the post-upgrade steps](#post-upgrade).
Steps to roll back:
1. Clean up the components introduced by Longhorn v0.7.0 upgrade
```
kubectl delete -f https://raw.githubusercontent.com/longhorn/longhorn/v0.7.0/examples/storageclass.yaml
curl -s https://raw.githubusercontent.com/longhorn/longhorn-manager/master/hack/cleancrds.sh | bash -s v070
```
2. Restart the kubelet container on all nodes, or restart all the nodes. This step WILL DISRUPT all the workloads in the system.
Connect to each node, then run
```
docker restart kubelet
```
3. Rollback
Use `kubectl apply` or the Rancher App to roll back Longhorn.
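If you originally deployed Longhorn from the YAML file, the rollback amounts to re-applying the previous release's manifest. A sketch, assuming the v0.6.2 manifest path in this repository:
```
# Re-apply the v0.6.2 manifest to roll the Longhorn components back.
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v0.6.2/deploy/longhorn.yaml
```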
#### Migrate the old PVs to use the new StorageClass
TODO
## Post upgrade
1. Bring the workloads back online.
1. Make sure all the volumes are back online.
1. Check that all the existing manager pods are running v0.7.0 and no v0.6.2 pods are running:
    1. Running `kubectl -n longhorn-system get pod -o yaml | grep "longhorn-manager:v0.6.2"` should yield no result.
1. Run the following script to clean up the v0.6.2 CRDs:
    1. You must make sure all the v0.6.2 pods HAVE BEEN DELETED first, otherwise the data WILL BE LOST!
```
curl -s https://raw.githubusercontent.com/longhorn/longhorn-manager/master/hack/cleancrds.sh | bash -s v062
```


@@ -4,6 +4,12 @@ Here we cover how to upgrade to latest Longhorn from all previous releases.
There are normally two steps in the upgrade process: first upgrade Longhorn manager to the latest version, then upgrade Longhorn engine to the latest version using the latest Longhorn manager.
## Upgrade from v0.6.2 to v0.7.0
See [here](./upgrade-from-v0.6.2-to-v0.7.0.md)
## Upgrade from older versions to v0.6.2
## Upgrade Longhorn manager from v0.3.0 or newer
### From Longhorn App (Rancher Catalog App)


@@ -0,0 +1,32 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: longhorn-block-vol
spec:
accessModes:
- ReadWriteOnce
volumeMode: Block
storageClassName: longhorn
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
name: block-volume-test
namespace: default
spec:
containers:
- name: block-volume-test
image: nginx:stable-alpine
imagePullPolicy: IfNotPresent
volumeDevices:
- devicePath: /dev/longhorn/testblk
name: block-vol
ports:
- containerPort: 80
volumes:
- name: block-vol
persistentVolumeClaim:
claimName: longhorn-block-vol


@@ -9,8 +9,9 @@ spec:
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: longhorn
csi:
driver: io.rancher.longhorn
driver: driver.longhorn.io
fsType: ext4
volumeAttributes:
numberOfReplicas: '3'
@@ -28,17 +29,26 @@ spec:
requests:
storage: 2Gi
volumeName: longhorn-vol-pv
storageClassName: longhorn
---
apiVersion: v1
kind: Pod
metadata:
name: volume-test
name: volume-pv-test
namespace: default
spec:
restartPolicy: Always
containers:
- name: volume-test
- name: volume-pv-test
image: nginx:stable-alpine
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- ls
- /data/lost+found
initialDelaySeconds: 5
periodSeconds: 5
volumeMounts:
- name: vol
mountPath: /data


@@ -40,9 +40,17 @@ spec:
labels:
app: mysql
spec:
restartPolicy: Always
containers:
- image: mysql:5.6
name: mysql
livenessProbe:
exec:
command:
- ls
- /var/lib/mysql/lost+found
initialDelaySeconds: 5
periodSeconds: 5
env:
- name: MYSQL_ROOT_PASSWORD
value: changeme


@@ -1,41 +0,0 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: longhorn
provisioner: rancher.io/longhorn
parameters:
numberOfReplicas: '3'
staleReplicaTimeout: '2880'
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: longhorn-vol-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: longhorn
---
apiVersion: v1
kind: Pod
metadata:
name: volume-test
namespace: default
spec:
containers:
- name: volume-test
image: nginx:stable-alpine
imagePullPolicy: IfNotPresent
volumeMounts:
- name: vol
mountPath: /data
ports:
- containerPort: 80
volumes:
- name: vol
persistentVolumeClaim:
claimName: longhorn-vol-pvc


@@ -1,25 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: volume-test
namespace: default
spec:
containers:
- name: volume-test
image: nginx:stable-alpine
imagePullPolicy: IfNotPresent
volumeMounts:
- name: voll
mountPath: /data
ports:
- containerPort: 80
volumes:
- name: voll
flexVolume:
driver: "rancher.io/longhorn"
fsType: "ext4"
options:
size: "2Gi"
numberOfReplicas: "3"
staleReplicaTimeout: "20"
fromBackup: ""


@@ -1,50 +0,0 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: longhorn-volv-pv
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
storageClassName: longhorn
flexVolume:
driver: "rancher.io/longhorn"
fsType: "ext4"
options:
size: "2Gi"
numberOfReplicas: "3"
staleReplicaTimeout: "20"
fromBackup: ""
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: longhorn-volv-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: longhorn
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
name: volume-test
namespace: default
spec:
containers:
- name: volume-test
image: nginx:stable-alpine
imagePullPolicy: IfNotPresent
volumeMounts:
- name: volv
mountPath: /data
ports:
- containerPort: 80
volumes:
- name: volv
persistentVolumeClaim:
claimName: longhorn-volv-pvc


@@ -16,10 +16,18 @@ metadata:
name: volume-test
namespace: default
spec:
restartPolicy: Always
containers:
- name: volume-test
image: nginx:stable-alpine
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- ls
- /data/lost+found
initialDelaySeconds: 5
periodSeconds: 5
volumeMounts:
- name: volv
mountPath: /data


@@ -27,10 +27,18 @@ spec:
labels:
app: nginx # has to match .spec.selector.matchLabels
spec:
restartPolicy: Always
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
livenessProbe:
exec:
command:
- ls
- /usr/share/nginx/html/lost+found
initialDelaySeconds: 5
periodSeconds: 5
ports:
- containerPort: 80
name: web


@@ -2,7 +2,7 @@ kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: longhorn
provisioner: rancher.io/longhorn
provisioner: driver.longhorn.io
parameters:
numberOfReplicas: "3"
staleReplicaTimeout: "2880"

longhorn-ui.png (new binary file, 400 KiB; not shown)


@@ -24,7 +24,10 @@ rules:
- apiGroups: ["batch"]
resources: ["jobs", "cronjobs"]
verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
- apiGroups: ["storage.k8s.io"]
resources: ["csidrivers"]
verbs: ["*"]
- apiGroups: ["longhorn.io"]
resources: ["volumes", "engines", "replicas", "settings", "engineimages", "nodes", "instancemanagers"]
verbs: ["*"]
---
@@ -55,7 +58,7 @@ spec:
spec:
containers:
- name: longhorn-uninstall
image: longhornio/longhorn-manager:v0.6.2
image: longhornio/longhorn-manager:v0.7.0-rc2
imagePullPolicy: Always
command:
- longhorn-manager