Longhorn is a distributed block storage system built using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple hosts. The storage controller and replicas are implemented using containers and are managed using a container orchestration system.
Longhorn is lightweight, reliable, and easy to use. It is particularly suitable as persistent storage for containers. It supports snapshots, backups, and even allows you to schedule recurring snapshots and backups!
4. Make sure `open-iscsi` has been installed on all nodes of the Kubernetes cluster (see the example command below). For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`.
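On Debian/Ubuntu nodes, for example, `open-iscsi` can be installed with the distribution package manager. This is only a sketch assuming apt-based nodes; the package name may differ on other distributions:
```
# install the iSCSI initiator required by Longhorn (Debian/Ubuntu)
sudo apt-get update && sudo apt-get install -y open-iscsi
```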
Creating the Longhorn deployment in your Kubernetes cluster is easy. For most Kubernetes setups (except GKE), you only need to run `kubectl create -f deploy/example.yaml`.
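To confirm the deployment came up, you can list the Longhorn pods. A minimal check, assuming the example manifest places the components in a `longhorn-system` namespace:
```
# the Longhorn manager, driver and UI pods should all reach the Running state
kubectl -n longhorn-system get pods
```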
Notice the field `flexVolume.driver: "rancher.io/longhorn"` in the YAML file. It specifies that the Longhorn FlexVolume plugin should be used. There are several option fields under `options` that the user can fill in.
Option | Required | Description
------------- | ----|---------
size | Yes | Specify the capacity of the volume in Longhorn; the unit should be `G`
numberOfReplicas | Yes | The number of replicas (HA feature) for this Longhorn volume
fromBackup | No | Backup URL in Longhorn. Specifies the backup from which the user wants to restore the volume (optional)
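For reference, these options can also be used with a FlexVolume declared directly in a pod spec. Below is a minimal sketch; the pod name `volume-test` and the `nginx` image are only illustrative:
```
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: volume-test
    image: nginx
    volumeMounts:
    - name: vol
      mountPath: /data
  volumes:
  - name: vol
    flexVolume:
      driver: "rancher.io/longhorn"
      fsType: "ext4"
      options:
        size: "2G"
        numberOfReplicas: "2"
        fromBackup: ""
```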
### Persistent Volume
This example shows how to use a YAML definition to manage a Persistent Volume (PV).
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: longhorn-volv-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  flexVolume:
    driver: "rancher.io/longhorn"
    fsType: "ext4"
    options:
      size: "2G"
      numberOfReplicas: "2"
      staleReplicaTimeout: "20"
      fromBackup: ""
```
The next YAML shows a Persistent Volume Claim (PVC) that matches the PV defined above.
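A minimal sketch of such a claim could look like the following (the claim name `longhorn-volv-pvc` is illustrative):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-volv-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
```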
Longhorn supports backing up to an NFS server. In order to use this feature, you need to have an NFS server running and accessible from the Kubernetes cluster. Here we provide a simple way to set up a testing NFS server.
It will create a simple NFS server in the `default` namespace, which other pods in the cluster can reach at `longhorn-test-nfs-svc.default`.
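Based on that service address, the backup target URL takes the NFS form sketched below; the exported path `/opt/backupstore` is only an assumption and must match whatever path the test NFS server actually exports:
```
nfs://longhorn-test-nfs-svc.default:/opt/backupstore
```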
Open the Longhorn UI, go to Setting, fill in the Backup Target field with the URL above, and click Save. Now you should be able to use the backup feature of Longhorn.
The configuration YAML will be slightly different for Google Kubernetes Engine (GKE):
1. GKE requires the user to manually grant themselves the cluster-admin role to enable RBAC. The user needs to execute the following command before creating the Longhorn system using the YAML files.
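A sketch of the usual GKE RBAC command; the binding name `cluster-admin-binding` is arbitrary and `<your GCP account email>` is a placeholder for the account you use with the cluster:
```
# grant your own account the cluster-admin role (binding name is arbitrary)
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=<your GCP account email>
```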
### Volume can be attached/detached from UI, but Kubernetes Pod/Deployment etc cannot use it
Check whether the volume plugin directory has been set correctly.
By default, Kubernetes uses `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` as the directory for volume plugin drivers, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites).
But some vendors may choose to change the directory for various reasons. For example, GKE uses `/home/kubernetes/flexvolume` instead.
Users can find the correct directory by running `ps aux | grep kubelet` on the host and checking the `--volume-plugin-dir` parameter. If it is not set, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used.