Update deploy example.yaml

Major updates:
1. Longhorn will now run in the `longhorn-system` namespace by default.
2. Improvements to the Longhorn Driver include a dependency check on startup and
automatic installation of a statically linked `jq`.
3. Use ganesha as the NFS server for testing, removing the dependency on
`nfs-kernel-server` on the host.
Sheng Yang 2018-01-15 17:16:18 -08:00
parent 2c6328d7cc
commit 1dd3618256
3 changed files with 121 additions and 71 deletions

README.md

@@ -4,7 +4,7 @@ Longhorn is a distributed block storage system built using containers and micros
 Longhorn is lightweight, reliable, and easy to use. It is particularly suitable as persistent storage for containers. It supports snapshots, backups, and even allows you to schedule recurring snapshots and backups!
-You can read more details of Longhorn and its design here: http://rancher.com/microservices-block-storage/.
+You can read more details of Longhorn and its design [here](http://rancher.com/microservices-block-storage/).
 Longhorn is experimental software. We appreciate your comments as we continue to work on it!
@@ -21,22 +21,20 @@ Longhorn is 100% open source software. Project source code is spread across a nu
 1. Docker v1.13+
 2. Kubernetes v1.8+
-3. Make sure `jq`, `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
+3. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
 4. Make sure `open-iscsi` has been installed on all nodes of the Kubernetes cluster. For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`.
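As a convenience (not part of the upstream README), a quick way to check a node for these utilities is a loop like the one below; the command list simply mirrors the prerequisites above, with `iscsiadm` standing in for the `open-iscsi` package:
```
# Report any missing prerequisite binaries on this node (sketch only)
for cmd in curl findmnt grep awk blkid iscsiadm; do
    command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```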
 ## Deployment
-Creating the Longhorn deployment in your Kubernetes cluster is easy. For example, for GKE, you will only need to run `kubectl create -f deploy/example.yaml`.
-The configuration yaml will be slightly different for each environment, for example:
-1. GKE requires the user to manually grant themselves the cluster-admin role to enable RBAC, using `kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<name@example.com>` (where `name@example.com` is the user's account name in GCE, and it is case sensitive). See [here](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control) for details.
-2. The default Flexvolume plugin directory is different on GKE 1.8+, which uses `/home/kubernetes/flexvolume`. You can find it by running `ps aux|grep kubelet` on the host and checking the `--flex-volume-plugin-dir` parameter. If there is none, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used.
-Longhorn Manager and Longhorn Driver will be deployed as daemonsets, as you can see in the yaml file.
+Creating the Longhorn deployment in your Kubernetes cluster is easy. For most Kubernetes setups (except GKE), you will only need to run `kubectl create -f deploy/example.yaml`.
+For Google Kubernetes Engine (GKE) users, see [here](#google-kubernetes-engine) before proceeding.
+Longhorn Manager and Longhorn Driver will be deployed as daemonsets in a separate namespace called `longhorn-system`, as you can see in the yaml file.
 When you see that those pods have started correctly, as follows, you've deployed Longhorn successfully.
 ```
+# kubectl -n longhorn-system get pod
 NAME                           READY     STATUS    RESTARTS   AGE
 longhorn-driver-7b8l7          1/1       Running   0          3h
 longhorn-driver-tqrlw          1/1       Running   0          3h
@@ -48,11 +46,10 @@ longhorn-ui-76674c87b9-89swr 1/1 Running 0 3h
 ```
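As an aside (not in the original README), since the manager and driver are daemonsets in the `longhorn-system` namespace, you can also compare their DESIRED and READY counts directly:
```
# Each daemonset should report one ready pod per eligible node
kubectl -n longhorn-system get daemonset longhorn-manager longhorn-driver
```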
 ## Access the UI
-Use `kubectl get svc` to get the external service IP for the UI:
+Use `kubectl -n longhorn-system get svc` to get the external service IP for the UI:
 ```
 NAME                TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
-kubernetes          ClusterIP      10.20.240.1     <none>            443/TCP        9d
 longhorn-backend    ClusterIP      10.20.248.250   <none>            9500/TCP       58m
 longhorn-frontend   LoadBalancer   10.20.245.110   100.200.200.123   80:30697/TCP   58m
 ```
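If your cluster has no load balancer and the `EXTERNAL-IP` stays `<pending>`, one possible workaround (not covered by this README, and dependent on your kubectl version) is to port-forward the frontend service locally:
```
# Expose the Longhorn UI at http://localhost:8080 until interrupted
kubectl -n longhorn-system port-forward svc/longhorn-frontend 8080:80
```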
@@ -169,32 +166,59 @@ spec:
 Longhorn supports backing up to an NFS server. In order to use this feature, you need to have an NFS server running and accessible from the Kubernetes cluster. Here we provide a simple way to help set up a testing NFS server.
-### Requirements
-1. Make sure `nfs-kernel-server` has been installed on all nodes of the Kubernetes cluster.
 ### Deployment
-Longhorn's backup feature requires an NFS server or an S3 endpoint. You can set up a simple NFS server on the same host and use that to store backups.
-The deployment for the simple NFS server is also very easy.
 ```
 kubectl create -f deploy/example-backupstore.yaml
 ```
-This NFS server won't save any data after you delete the Deployment. It's for development and testing only.
+It will create a simple NFS server in the `default` namespace, which can be addressed as `longhorn-test-nfs-svc.default` by other pods in the cluster.
+WARNING: This NFS server won't save any data after you delete it. It's for development and testing only.
 After this script completes, use the following URL as the Backup Target in the Longhorn setting:
 ```
-nfs://longhorn-nfs-svc:/opt/backupstore
+nfs://longhorn-test-nfs-svc.default:/opt/backupstore
 ```
 Open the Longhorn UI, go to Setting, fill in the Backup Target field with the URL above, and click Save. Now you should be able to use the backup feature of Longhorn.
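Before saving the setting, you may want to confirm that the test NFS service resolves from inside the cluster. One illustrative check (not part of the upstream docs) uses a throwaway busybox pod:
```
# Resolve the NFS service name from a temporary pod, then clean it up
kubectl run nfs-dns-check --rm -it --restart=Never --image=busybox -- nslookup longhorn-test-nfs-svc.default
```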
+## Google Kubernetes Engine
+The configuration yaml will be slightly different for Google Kubernetes Engine (GKE):
+1. GKE requires the user to manually grant themselves the cluster-admin role to enable RBAC. Run the following command before creating the Longhorn system from the yaml files (see the note after this list for a quick way to find the account name):
+```
+kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<name@example.com>
+```
+Here `name@example.com` is the user's account name in GCE, and it is case sensitive.
+See [here](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control) for details.
+2. The default Flexvolume plugin directory is different on GKE 1.8+, which uses `/home/kubernetes/flexvolume`. Use
+```
+      - name: flexvolume-longhorn-mount
+        hostPath:
+          path: /home/kubernetes/flexvolume/
+```
+instead of
+```
+      - name: flexvolume-longhorn-mount
+        hostPath:
+          path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
+```
+in the last part of the Longhorn system deployment yaml file.
+See [Troubleshooting](#troubleshooting) for details.
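As mentioned in item 1 above, the account name has to match exactly. Assuming the `gcloud` CLI is configured for the cluster's project, one way to look it up is:
```
# Print the active GCE/GKE account; use this value, case preserved, in the clusterrolebinding
gcloud config get-value account
```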
+## Troubleshooting
+### Volume can be attached/detached from the UI, but Kubernetes Pods/Deployments etc. cannot use it
+Check whether the volume plugin directory has been set correctly.
+By default, Kubernetes uses `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` as the directory for volume plugin drivers, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites).
+But some vendors may choose to change the directory for various reasons. For example, GKE uses `/home/kubernetes/flexvolume` instead.
+You can find the correct directory by running `ps aux|grep kubelet` on the host and checking the `--flex-volume-plugin-dir` parameter. If it is not set, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used.
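For example, a one-liner along these lines (illustrative only) prints the flag when the kubelet was started with it, and prints nothing when the default directory applies:
```
# Show the kubelet's flexvolume plugin directory flag, if any
ps aux | grep '[k]ubelet' | tr ' ' '\n' | grep -- '--flex-volume-plugin-dir'
```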
 ## License
-Copyright (c) 2014-2017 [Rancher Labs, Inc.](http://rancher.com)
+Copyright (c) 2014-2018 [Rancher Labs, Inc.](http://rancher.com)
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.

deploy/example-backupstore.yaml

@@ -1,39 +1,35 @@
-apiVersion: extensions/v1beta1
-kind: Deployment
+apiVersion: v1
+kind: Pod
 metadata:
-  name: longhorn-test-backupstore
+  name: longhorn-test-nfs
   labels:
-    app: longhorn-nfs
+    app: longhorn-test-nfs
 spec:
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        app: longhorn-nfs
-    spec:
-      containers:
-      - name: longhorn-test-backupstore-pod
-        image: docker.io/erezhorev/dockerized_nfs_server
-        securityContext:
-          privileged: true
-        ports:
-        # dummy port to keep k8s happy
-        - containerPort: 1111
-          name: longhorn-nfs
-        args: ["/opt/backupstore"]
+  containers:
+  - name: longhorn-test-nfs-container
+    image: janeczku/nfs-ganesha:latest
+    imagePullPolicy: Always
+    env:
+    - name: EXPORT_ID
+      value: "14"
+    - name: EXPORT_PATH
+      value: /opt/backupstore
+    - name: PSEUDO_PATH
+      value: /opt/backupstore
+    command: ["bash", "-c", "mkdir -p /opt/backupstore && /opt/start_nfs.sh"]
+    securityContext:
+      capabilities:
+        add: ["SYS_ADMIN", "DAC_READ_SEARCH"]
 ---
 kind: Service
 apiVersion: v1
 metadata:
-  labels:
-    app: longhorn-nfs
-  name: longhorn-nfs-svc
+  name: longhorn-test-nfs-svc
 spec:
   selector:
-    app: longhorn-nfs
+    app: longhorn-test-nfs
   clusterIP: None
   ports:
-  # dummy port to keep k8s happy
-  - name: longhorn-nfs
-    port: 1111
-    targetPort: longhorn-nfs
+  - name: notnecessary
+    port: 1234
+    targetPort: 1234
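A minimal way to confirm the test backupstore came up (illustrative only; the pod and service names come from the yaml above):
```
# The NFS pod should be Running and the headless service should exist
kubectl get pod longhorn-test-nfs
kubectl get svc longhorn-test-nfs-svc
```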

deploy/example.yaml

@@ -1,15 +1,13 @@
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRoleBinding
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: longhorn-system
+---
+apiVersion: v1
+kind: ServiceAccount
 metadata:
-  name: longhorn-bind
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: longhorn-role
-subjects:
-- kind: ServiceAccount
   name: longhorn-service-account
-  namespace: default
+  namespace: longhorn-system
 ---
 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRole
@@ -25,6 +23,9 @@ rules:
 - apiGroups: [""]
   resources: ["pods"]
   verbs: ["*"]
+- apiGroups: ["batch"]
+  resources: ["jobs"]
+  verbs: ["*"]
 - apiGroups: ["longhorn.rancher.io"]
   resources: ["nodes"]
   verbs: ["*"]
@@ -41,10 +42,18 @@ rules:
   resources: ["controllers"]
   verbs: ["*"]
 ---
-apiVersion: v1
-kind: ServiceAccount
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRoleBinding
 metadata:
+  name: longhorn-bind
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: longhorn-role
+subjects:
+- kind: ServiceAccount
   name: longhorn-service-account
+  namespace: longhorn-system
 ---
 apiVersion: extensions/v1beta1
 kind: DaemonSet
@@ -52,6 +61,7 @@ metadata:
   labels:
     app: longhorn-manager
   name: longhorn-manager
+  namespace: longhorn-system
 spec:
   template:
     metadata:
@@ -67,8 +77,7 @@ spec:
           mountPath: /data/
       containers:
       - name: longhorn-manager
-        image: rancher/longhorn-manager:6c51e02
-        imagePullPolicy: Always
+        image: rancher/longhorn-manager:4d21cac
        securityContext:
           privileged: true
         command: ["launch-manager", "-d",
@@ -124,6 +133,7 @@ metadata:
   labels:
     app: longhorn-manager
   name: longhorn-backend
+  namespace: longhorn-system
 spec:
   selector:
     app: longhorn-manager
@@ -139,6 +149,7 @@ metadata:
   labels:
     app: longhorn-ui
   name: longhorn-ui
+  namespace: longhorn-system
 spec:
   replicas: 1
   template:
@@ -148,8 +159,7 @@ spec:
     spec:
       containers:
       - name: longhorn-ui
-        image: rancher/longhorn-ui:b161e3a
-        imagePullPolicy: IfNotPresent
+        image: rancher/longhorn-ui:99622cb
         ports:
         - containerPort: 8000
           name: longhorn-ui
@@ -163,6 +173,7 @@ metadata:
   labels:
     app: longhorn-ui
   name: longhorn-frontend
+  namespace: longhorn-system
 spec:
   selector:
     app: longhorn-ui
@@ -177,6 +188,7 @@ apiVersion: extensions/v1beta1
 kind: DaemonSet
 metadata:
   name: longhorn-driver
+  namespace: longhorn-system
 spec:
   template:
     metadata:
@@ -184,8 +196,17 @@ spec:
       labels:
         app: longhorn-driver
     spec:
+      initContainers:
+      - name: init-container
+        image: rancher/longhorn-driver:4d21cac
+        securityContext:
+          privileged: true
+        command: ["/checkdependency.sh"]
+        volumeMounts:
+        - name: host-proc-mount
+          mountPath: /host/proc/
       containers:
-      - image: rancher/longhorn-driver:5260c7b
+      - image: rancher/longhorn-driver:4d21cac
         imagePullPolicy: Always
         name: longhorn-driver-container
         command: ["/entrypoint.sh"]
@@ -194,6 +215,8 @@ spec:
         volumeMounts:
         - mountPath: /flexmnt
           name: flexvolume-longhorn-mount
+        - mountPath: /binmnt
+          name: usr-local-bin-mount
         env:
         - name: LONGHORN_BACKEND_SVC
           value: "longhorn-backend"
@@ -204,5 +227,12 @@ spec:
       volumes:
       - name: flexvolume-longhorn-mount
         hostPath:
-          path: /home/kubernetes/flexvolume
-          #path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
+          path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
+          #FOR GKE
+          #path: /home/kubernetes/flexvolume/
+      - name: usr-local-bin-mount
+        hostPath:
+          path: /usr/local/bin/
+      - name: host-proc-mount
+        hostPath:
+          path: /proc/
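After applying this yaml, one way to confirm that the new init container's dependency check passed on a node (illustrative; substitute a real pod name from the first command) is to read its logs:
```
# List driver pods, then inspect the init container's dependency-check output
kubectl -n longhorn-system get pod -l app=longhorn-driver
kubectl -n longhorn-system logs <longhorn-driver-pod-name> -c init-container
```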