# Longhorn
Longhorn is a distributed block storage system built using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple hosts. The storage controller and replicas are implemented using containers and are managed using a container orchestration system.
Longhorn is lightweight, reliable, and easy to use. It is particularly suitable as persistent storage for containers. It supports snapshots, backups, and even allows you to schedule recurring snapshots and backups!
You can read more details of Longhorn and its design [here](http://rancher.com/microservices-block-storage/).
Longhorn is experimental software. We appreciate your comments as we continue to work on it!
## Source Code
Longhorn is 100% open source software. Project source code is spread across a number of repos:
1. Longhorn Engine -- Core controller/replica logic https://github.com/rancher/longhorn-engine
1. Longhorn Manager -- Longhorn orchestration, including the Flexvolume driver for Kubernetes https://github.com/rancher/longhorn-manager
1. Longhorn UI -- Dashboard https://github.com/rancher/longhorn-ui
# Deploy in Kubernetes
## Requirements
1. Docker v1.13+
2. Kubernetes v1.8+
3. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
4. Make sure `open-iscsi` has been installed on all nodes of the Kubernetes cluster. For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`. A quick way to check these prerequisites on a node is sketched below.
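A minimal prerequisite check that could be run on each node (a sketch; the exact install steps for any missing package depend on your distribution):
```
# Verify the required utilities are present on this node.
for cmd in curl findmnt grep awk blkid iscsiadm; do
    command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```
`iscsiadm` is the client utility shipped with `open-iscsi`; if it is reported missing, install the `open-iscsi` package for your distribution.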
## Deployment
Deploying Longhorn in your Kubernetes cluster is easy. For most Kubernetes setups (except GKE), you only need to run `kubectl create -f deploy/example.yaml`.
For Google Kubernetes Engine (GKE) users, see [here](#google-kubernetes-engine) before proceeding.
Longhorn Manager and Longhorn Driver will be deployed as DaemonSets in a separate namespace called `longhorn-system`, as you can see in the yaml file.
When you see that the pods have started correctly, as follows, you have deployed Longhorn successfully.
```
# kubectl -n longhorn-system get pod
NAME                           READY     STATUS    RESTARTS   AGE
longhorn-driver-7b8l7          1/1       Running   0          3h
longhorn-driver-tqrlw          1/1       Running   0          3h
longhorn-driver-xqkjg          1/1       Running   0          3h
longhorn-manager-67mqs         1/1       Running   0          3h
longhorn-manager-bxfw9         1/1       Running   0          3h
longhorn-manager-5kj2f         1/1       Running   0          3h
longhorn-ui-76674c87b9-89swr   1/1       Running   0          3h
```
## Access the UI
Use `kubectl -n longhorn-system get svc` to get the external service IP for UI:
```
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
longhorn-backend    ClusterIP      10.20.248.250   <none>            9500/TCP       58m
longhorn-frontend   LoadBalancer   10.20.245.110   100.200.200.123   80:30697/TCP   58m
```
You can then use the `EXTERNAL-IP` of `longhorn-frontend` (`100.200.200.123` in the example above) to access the Longhorn UI.
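If your cluster cannot provision an external load balancer and `EXTERNAL-IP` stays `<pending>`, one workaround is to forward the frontend service to your local machine. This uses standard `kubectl` functionality and is not part of the official setup:
```
# Forward local port 8080 to the longhorn-frontend service,
# then browse to http://localhost:8080
kubectl -n longhorn-system port-forward service/longhorn-frontend 8080:80
```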
## How to use the Longhorn Volume in your pod
There are several ways to use the Longhorn volume.
### Pod with Longhorn volume
The following YAML shows the definition of a pod that directs Longhorn to attach a volume for the pod to use.
```
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: vol
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: vol
    flexVolume:
      driver: "rancher.io/longhorn"
      fsType: "ext4"
      options:
        size: "2G"
        numberOfReplicas: "2"
        staleReplicaTimeout: "20"
        fromBackup: ""
```
Notice the `flexVolume.driver: "rancher.io/longhorn"` field in the YAML file. It specifies that the Longhorn FlexVolume plugin should be used. There are several fields in `options` that you can fill in.
Option | Required | Description
------------- | ----|---------
size | Yes | The capacity of the volume in Longhorn; the unit should be `G`
numberOfReplicas | Yes | The number of replicas (HA feature) for this Longhorn volume
fromBackup | No | The URL of a Longhorn backup to restore the volume from
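Once the pod above is created, a quick way to confirm that the Longhorn volume was attached and mounted (a sketch using standard `kubectl` commands and the pod name from the example above):
```
# The pod should reach Running once the volume is attached.
kubectl get pod volume-test

# The Longhorn volume should appear as an ext4 filesystem mounted at /data.
kubectl exec volume-test -- df -h /data
```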
### Persistent Volume
This example shows how to use a YAML definition to manage a Persistent Volume (PV).
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: longhorn-volv-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  flexVolume:
    driver: "rancher.io/longhorn"
    fsType: "ext4"
    options:
      size: "2G"
      numberOfReplicas: "2"
      staleReplicaTimeout: "20"
      fromBackup: ""
```
The next YAML shows a Persistent Volume Claim (PVC) that matches the PV defined above.
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-volv-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
```
The claim can then be used by a pod in a YAML definition as shown below:
```
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-volv-pvc
```
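To tie the three definitions together, a minimal workflow might look like the following. The file names are assumptions for illustration; save each YAML above into its own file first:
```
kubectl create -f longhorn-pv.yaml
kubectl create -f longhorn-pvc.yaml
kubectl create -f volume-test-pod.yaml

# The PVC should report STATUS Bound, and the pod should reach Running.
kubectl get pvc longhorn-volv-pvc
kubectl get pod volume-test
```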
## Set up a simple NFS server for storing backups
Longhorn supports backing up to an NFS server. To use this feature, you need an NFS server running and accessible from the Kubernetes cluster. Here we provide a simple way to set up a testing NFS server.
### Deployment
```
kubectl create -f deploy/example-backupstore.yaml
```
It will create a simple NFS server in the `default` namespace, which can be addressed as `longhorn-test-nfs-svc.default` for other pods in the cluster.
WARNING: This NFS server won't save any data after you delete it. It's for development and testing only.
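To confirm the NFS server is up before configuring the backup target, you can check that its service exists. This is a quick sketch; the service name is inferred from the cluster address mentioned above:
```
# The longhorn-test-nfs-svc service should be listed in the default namespace.
kubectl -n default get svc longhorn-test-nfs-svc
```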
After the deployment completes, use the following URL as the Backup Target in the Longhorn settings:
```
nfs://longhorn-test-nfs-svc.default:/opt/backupstore
```
Open the Longhorn UI, go to Settings, fill in the Backup Target field with the URL above, and click Save. You should now be able to use Longhorn's backup feature.
## Google Kubernetes Engine
The configuration yaml will be slightly different for Google Kubernetes Engine (GKE):
1. GKE requires the user to manually grant themselves cluster-admin privileges to enable RBAC. You need to execute the following command before creating the Longhorn system using the yaml files.
```
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<name@example.com>
```
Here `name@example.com` is the user's account name in GCE, and it is case sensitive.
See [here](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control) for details.
2. The default Flexvolume plugin directory is different on GKE 1.8+: it is `/home/kubernetes/flexvolume`. You need to use
```
- name: flexvolume-longhorn-mount
  hostPath:
    path: /home/kubernetes/flexvolume/
```
instead of
```
- name: flexvolume-longhorn-mount
  hostPath:
    path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
```
in the last part of the Longhorn system deployment yaml file.
See [Troubleshooting](#troubleshooting) for details.
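If you prefer not to edit the yaml by hand, one way to make the substitution is with `sed`. This is a sketch; review the modified file before applying it:
```
# Replace the default Flexvolume directory with the GKE one in the deployment yaml.
sed -i 's#/usr/libexec/kubernetes/kubelet-plugins/volume/exec/#/home/kubernetes/flexvolume/#' deploy/example.yaml
```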
## Uninstall Longhorn
Since Longhorn uses Kubernetes `CustomResourceDefinition`s, two commands are needed to uninstall it from your Kubernetes cluster.
```
kubectl delete -f deploy/example.yaml
kubectl delete crd -l longhorn-manager
```
## Troubleshooting
### Volume can be attached/detached from UI, but Kubernetes Pod/Deployment etc. cannot use it
Check whether the volume plugin directory has been set correctly.
By default, Kubernetes uses `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` as the directory for volume plugin drivers, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites).
But some vendors may choose to change the directory for various reasons. For example, GKE uses `/home/kubernetes/flexvolume` instead.
You can find the correct directory by running `ps aux | grep kubelet` on the host and checking the `--volume-plugin-dir` parameter. If it is not set, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used.
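For example, a check along these lines would print the custom directory if one is configured (a sketch; the exact kubelet invocation varies by distribution):
```
# Print the kubelet's --volume-plugin-dir flag, if set; no output means the default is in use.
ps aux | grep kubelet | grep -o '\-\-volume-plugin-dir[^ ]*'
```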
## License
Copyright (c) 2014-2018 [Rancher Labs, Inc.](http://rancher.com)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.