K8s support (#22)

* Update to reflect k8s deployment

* Fix a typo

* Update README.md
Sheng Yang 2017-12-06 10:37:00 +08:00 committed by GitHub
parent 8154c91419
commit 1c346eb3f6
9 changed files with 484 additions and 343 deletions

README.md

@@ -6,82 +6,192 @@ Longhorn is lightweight, reliable, and easy-to-use. It is particularly suitable
You can read more details of Longhorn and its design here: http://rancher.com/microservices-block-storage/.
Longhorn is experimental software. We appreciate your comments as we continue to work on it!
## Source Code
Longhorn is 100% open source software. Project source code is spread across a number of repos:
1. Longhorn Engine -- Core controller/replica logic https://github.com/rancher/longhorn-engine
1. Longhorn Manager -- Longhorn orchestration, includes the Flexvolume driver for Kubernetes https://github.com/rancher/longhorn-manager
1. Longhorn UI -- Dashboard https://github.com/rancher/longhorn-ui
#### Build your own Longhorn
In order to build your own Longhorn, you need to build a couple of separate components, as stated above.
The build process is described in each component's repository.
Each component produces a Docker image at the end of the build process. You can use those images to swap the correlated lines in the [deploying script](https://github.com/rancher/longhorn/blob/master/deploy/longhorn-deploy-node.sh#L5) to test your own build.
# Deploy on Kubernetes
## Requirements
Longhorn requires one or more hosts running the following software:
1. Docker v1.13+
2. Kubernetes v1.8+
3. Make sure `jq`, `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
4. Make sure `open-iscsi` has been installed on all nodes of the Kubernetes cluster; it provides the `iscsiadm` executable that Longhorn requires (a quick check is sketched below).
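A quick way to verify these prerequisites on each node is sketched below (assumes a Debian/Ubuntu-style host; package names may differ on other distros):
```
# Check that every utility Longhorn needs is on this node.
for cmd in jq curl findmnt grep awk blkid iscsiadm; do
    command -v ${cmd} > /dev/null || echo "missing: ${cmd}"
done
# On Ubuntu: jq via `apt-get install jq`, findmnt/blkid via util-linux,
# iscsiadm via `apt-get install open-iscsi`.
```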
## Deployment
Creating the deployment of Longhorn in your Kubernetes cluster is easy. For example, on GKE, you only need to run `kubectl create -f deploy/example.yaml`.
The configuration YAML will be slightly different for each environment, for example:
1. GKE requires the user to manually grant themselves cluster-admin privileges in order to enable RBAC, using `kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<name@example.com>` (where `name@example.com` is the user's account name in GCE, and it's case sensitive). See [here](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control) for details.
2. The default Flexvolume plugin directory is different on GKE 1.8+: it is `/home/kubernetes/flexvolume`. You can find the directory by running `ps aux|grep kubelet` on the host and checking the `--flex-volume-plugin-dir` parameter; if it is not set, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used. See the check below.
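For example, the check described above can be scripted like this (a sketch; run it on each node):
```
# Print the kubelet Flexvolume plugin directory, if explicitly set.
ps aux | grep '[k]ubelet' | grep -o -- '--flex-volume-plugin-dir[= ][^ ]*'
# No output means kubelet uses the default:
#   /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
```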
Longhorn Manager and Longhorn Driver will be deployed as DaemonSets, as you can see in the yaml file.
When you see the pods listed below in the output of `kubectl get pods`, you've deployed Longhorn successfully:
```
NAME READY STATUS RESTARTS AGE
longhorn-driver-7b8l7 1/1 Running 0 3h
longhorn-driver-tqrlw 1/1 Running 0 3h
longhorn-driver-xqkjg 1/1 Running 0 3h
longhorn-manager-67mqs 1/1 Running 0 3h
longhorn-manager-bxfw9 1/1 Running 0 3h
longhorn-manager-5kj2f 1/1 Running 0 3h
longhorn-ui-76674c87b9-89swr 1/1 Running 0 3h
```
## Access the UI
Use `kubectl get svc` to get the external service IP for the UI:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.20.240.1 <none> 443/TCP 9d
longhorn-backend ClusterIP 10.20.248.250 <none> 9500/TCP 58m
longhorn-frontend LoadBalancer 10.20.245.110 100.200.200.123 80:30697/TCP 58m
```
Then you can use the `EXTERNAL-IP` (`100.200.200.123` in the example above) of `longhorn-frontend` to access the Longhorn UI.
## How to use the Longhorn Volume in your pod
There are several ways to use Longhorn volumes.
### Pod with Longhorn volume
The following YAML file shows the definition of a pod that makes Longhorn attach a volume to be used by the pod:
```
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: vol
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: vol
    flexVolume:
      driver: "rancher.io/longhorn"
      fsType: "ext4"
      options:
        size: "2G"
        numberOfReplicas: "2"
        staleReplicaTimeout: "20"
        fromBackup: ""
```
Notice the field `flexVolume.driver: "rancher.io/longhorn"` in the YAML file. It specifies that the Longhorn Flexvolume plugin should be used. There are several fields under `options` the user can fill in:
Option | Required | Description
------------- | ----|---------
size | Yes | Specifies the capacity of the volume in Longhorn; the unit should be `G`
numberOfReplicas | Yes | The number of replicas (HA feature) for the volume
fromBackup | No | The backup URL in Longhorn, specifying where to restore the volume from (optional)
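As a quick sanity check, you can create the pod above and confirm the volume is mounted (this commit ships an equivalent manifest as `examples/example.yaml`):
```
kubectl create -f examples/example.yaml
# Once the pod is Running, confirm the Longhorn volume is mounted at /data.
kubectl get pod volume-test
kubectl exec volume-test -- df -h /data
```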
### Persistent Volume
This example shows how to use a YAML definition to manage a Persistent Volume (PV).
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: longhorn-volv-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  flexVolume:
    driver: "rancher.io/longhorn"
    fsType: "ext4"
    options:
      size: "2G"
      numberOfReplicas: "2"
      staleReplicaTimeout: "20"
      fromBackup: ""
```
The next YAML shows a Persistent Volume Claim (PVC) that matches the PV defined above.
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-volv-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
```
The claim can then be used by a pod in a YAML definition as shown below:
```
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-volv-pvc
```
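This commit also ships the PV, PVC, and pod above as a single manifest, `examples/example_pv.yaml`. A sketch of the whole flow:
```
kubectl create -f examples/example_pv.yaml
# The PVC should bind to the pre-created PV before the pod starts.
kubectl get pv longhorn-volv-pv
kubectl get pvc longhorn-volv-pvc
kubectl get pod volume-test
```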
## Setup a simple NFS server for storing backups
Longhorn supports backing up to an NFS server. In order to use this feature, you need to have an NFS server running and accessible from the Kubernetes cluster. Here we provide a simple way to set up a testing NFS server.
### Requirements
1. Make sure `nfs-kernel-server` has been installed on all nodes of the Kubernetes cluster.
### Deployment
The deployment of the simple NFS server is also very easy:
```
kubectl create -f deploy/example-backupstore.yaml
```
This NFS server won't save any data after you delete the Deployment. It's for development and testing only.
After the deployment completes, use the following URL as the Backup Target in the Longhorn setting:
```
nfs://longhorn-nfs-svc:/opt/backupstore
```
Open the Longhorn UI, go to `Setting`, fill the `Backup Target` field with the URL above, and click `Save`. Now you should be able to use the backup feature of Longhorn.
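If the backup feature doesn't work, first confirm that the backupstore pieces are running; the names below come from `deploy/example-backupstore.yaml`:
```
kubectl get deployment longhorn-test-backupstore
kubectl get svc longhorn-nfs-svc
```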
## License
Copyright (c) 2014-2017 [Rancher Labs, Inc.](http://rancher.com)
@@ -97,4 +207,3 @@ distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

deploy/common.sh

@@ -1,39 +0,0 @@
#!/bin/bash
# Remove a container if it exists.
cleanup(){
    name=$1
    set +e
    echo clean up ${name} if exists
    docker rm -vf ${name} > /dev/null 2>&1
    set -e
}

# Print the IP of a container, retrying for up to 50 seconds.
get_container_ip() {
    container=$1
    for i in `seq 1 5`
    do
        ip=`docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $container`
        if [ "$ip" != "" ]
        then
            break
        fi
        sleep 10
    done
    if [ "$ip" == "" ]
    then
        echo cannot find ip for $container
        exit 1
    fi
    echo $ip
}

# Return 0 if the argument is a valid dotted-quad IPv4 address.
validate_ip() {
    ip=$1
    rx='([1-9]?[0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])'
    if [[ $ip =~ ^$rx\.$rx\.$rx\.$rx$ ]]; then
        return 0
    fi
    echo Invalid ip address ${ip}
    return 1
}

deploy/deploy-simple-nfs.sh

@@ -1,58 +0,0 @@
#!/bin/bash
set -e

echo MAKE SURE you have \"nfs-kernel-server\" installed on the host before starting this NFS server
echo Press Ctrl-C to bail out in 3 seconds
sleep 3
echo WARNING: This NFS server won\'t save any data after you delete the container
sleep 1

source ./common.sh

# Note: USAGE was never defined in the original script; defining it here so the
# unknown-option branch prints something useful.
USAGE="Usage: $(basename $0) [-n \<network\>]"

while [[ $# -gt 1 ]]
do
    key="$1"
    case $key in
        -n|--network)
            network="$2"
            shift # past argument
            ;;
        *)
            # unknown option
            echo ${USAGE}
            break
            ;;
    esac
    shift
done

NFS_SERVER=longhorn-nfs-server
NFS_IMAGE=docker.io/erezhorev/dockerized_nfs_server
BACKUPSTORE_PATH=/opt/backupstore

network_option=
if [ "$network" != "" ]; then
    network_option="--network ${network}"
fi

docker run -d \
    --name ${NFS_SERVER} \
    ${network_option} \
    --privileged \
    ${NFS_IMAGE} ${BACKUPSTORE_PATH}

nfs_ip=$(get_container_ip ${NFS_SERVER})

echo NFS server is up
echo
echo Set following URL as the Backup Target in the Longhorn:
echo
echo nfs://${nfs_ip}:${BACKUPSTORE_PATH}
echo

deploy/example-backupstore.yaml

@@ -0,0 +1,39 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: longhorn-test-backupstore
  labels:
    app: longhorn-nfs
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: longhorn-nfs
    spec:
      containers:
      - name: longhorn-test-backupstore-pod
        image: docker.io/erezhorev/dockerized_nfs_server
        securityContext:
          privileged: true
        ports:
        # dummy port to keep k8s happy
        - containerPort: 1111
          name: longhorn-nfs
        args: ["/opt/backupstore"]
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: longhorn-nfs
  name: longhorn-nfs-svc
spec:
  selector:
    app: longhorn-nfs
  clusterIP: None
  ports:
  # dummy port to keep k8s happy
  - name: longhorn-nfs
    port: 1111
    targetPort: longhorn-nfs
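Note that `longhorn-nfs-svc` is a headless Service (`clusterIP: None`), so the backup target URL `nfs://longhorn-nfs-svc:/opt/backupstore` depends on cluster DNS resolving the Service name. A quick in-cluster resolution check (a sketch; the throwaway `busybox` pod is an illustrative assumption):
```
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup longhorn-nfs-svc
```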

deploy/example.yaml

@@ -0,0 +1,208 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: longhorn-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: longhorn-role
subjects:
- kind: ServiceAccount
  name: longhorn-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: longhorn-role
rules:
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - "*"
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["nodes"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["volumes"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["replicas"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["settings"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["controllers"]
  verbs: ["*"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: longhorn-service-account
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: longhorn-manager
  name: longhorn-manager
spec:
  template:
    metadata:
      labels:
        app: longhorn-manager
    spec:
      initContainers:
      - name: init-container
        image: rancher/longhorn-engine:17e33fc
        command: ['sh', '-c', 'cp /usr/local/bin/* /data/']
        volumeMounts:
        - name: execbin
          mountPath: /data/
      containers:
      - name: longhorn-manager
        image: rancher/longhorn-manager:6c51e02
        imagePullPolicy: Always
        securityContext:
          privileged: true
        command: ["launch-manager", "-d",
                  "--orchestrator", "kubernetes",
                  "--engine-image", "rancher/longhorn-engine:17e33fc"]
        ports:
        - containerPort: 9500
          name: manager
        volumeMounts:
        - name: dev
          mountPath: /host/dev/
        - name: proc
          mountPath: /host/proc/
        - name: varrun
          mountPath: /var/run/
        - name: longhorn
          mountPath: /var/lib/rancher/longhorn/
        - name: execbin
          mountPath: /usr/local/bin/
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      volumes:
      - name: dev
        hostPath:
          path: /dev/
      - name: proc
        hostPath:
          path: /proc/
      - name: varrun
        hostPath:
          path: /var/run/
      - name: longhorn
        hostPath:
          path: /var/lib/rancher/longhorn/
      - name: execbin
        emptyDir: {}
      serviceAccountName: longhorn-service-account
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: longhorn-manager
  name: longhorn-backend
spec:
  selector:
    app: longhorn-manager
  ports:
  - name: manager
    port: 9500
    targetPort: manager
  sessionAffinity: ClientIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: longhorn-ui
  name: longhorn-ui
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: longhorn-ui
    spec:
      containers:
      - name: longhorn-ui
        image: rancher/longhorn-ui:b161e3a
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
          name: longhorn-ui
        env:
        - name: LONGHORN_MANAGER_IP
          value: "http://longhorn-backend:9500"
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: longhorn-ui
  name: longhorn-frontend
spec:
  selector:
    app: longhorn-ui
  ports:
  - name: longhorn-ui
    port: 80
    targetPort: longhorn-ui
  type: LoadBalancer
  sessionAffinity: ClientIP
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: longhorn-driver
spec:
  template:
    metadata:
      name: longhorn-driver
      labels:
        app: longhorn-driver
    spec:
      containers:
      - image: rancher/longhorn-driver:5260c7b
        imagePullPolicy: Always
        name: longhorn-driver-container
        command: ["/entrypoint.sh"]
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /flexmnt
          name: flexvolume-longhorn-mount
        env:
        - name: LONGHORN_BACKEND_SVC
          value: "longhorn-backend"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      volumes:
      - name: flexvolume-longhorn-mount
        hostPath:
          path: /home/kubernetes/flexvolume
          #path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
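After `kubectl create -f deploy/example.yaml`, the objects defined above can be checked with a sketch like:
```
kubectl get daemonset longhorn-manager longhorn-driver
kubectl get deployment longhorn-ui
kubectl get svc longhorn-backend longhorn-frontend
```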

deploy/longhorn-deploy-node.sh

@@ -1,130 +0,0 @@
#!/bin/bash
set -e

LONGHORN_ENGINE_IMAGE="rancher/longhorn-engine:046b5a5"
LONGHORN_MANAGER_IMAGE="rancher/longhorn-manager:57f92f0"
LONGHORN_DRIVER_IMAGE="rancher/storage-longhorn:11a4f5a"
LONGHORN_UI_IMAGE="rancher/longhorn-ui:b09b215"

source ./common.sh

USAGE="Usage: $(basename $0) -e \<etcd_ip\> [-n \<network\> -p \<ui_port\>]"

while [[ $# -gt 1 ]]
do
    key="$1"
    case $key in
        -e|--etcd-ip)
            etcd_ip="$2"
            shift # past argument
            ;;
        -n|--network)
            network="$2"
            shift # past argument
            ;;
        -p|--ui-port)
            port="$2"
            shift # past argument
            ;;
        *)
            # unknown option
            echo ${USAGE}
            break
            ;;
    esac
    shift
done

if [ "$etcd_ip" == "" ]; then
    echo ${USAGE}
    exit 1
fi

# will error out if fail since we have set -e
validate_ip ${etcd_ip}

network_option=
if [ "$network" != "" ]; then
    network_option="--network ${network}"
fi

ui_port=8080
if [ "$port" != "" ]; then
    ui_port=$port
fi

# iscsiadm is required on the host; check for it outside of set -e.
set +e
iscsiadm --version > /dev/null 2>&1
if [ $? -ne 0 ]; then
    echo Cannot find \`iscsiadm\` on the host, please install \`open-iscsi\` package
    exit 1
fi
set -e

LONGHORN_ENGINE_BINARY_NAME="longhorn-engine-binary"
LONGHORN_MANAGER_NAME="longhorn-manager"
LONGHORN_DRIVER_NAME="longhorn-driver"
LONGHORN_UI_NAME="longhorn-ui"

# longhorn-binary first, provides binary to longhorn-manager
cleanup ${LONGHORN_ENGINE_BINARY_NAME}
docker run --name ${LONGHORN_ENGINE_BINARY_NAME} \
    --network none \
    ${LONGHORN_ENGINE_IMAGE} \
    /bin/bash
echo ${LONGHORN_ENGINE_BINARY_NAME} is ready

# now longhorn-manager
cleanup ${LONGHORN_MANAGER_NAME}
docker run -d \
    --name ${LONGHORN_MANAGER_NAME} \
    ${network_option} \
    --restart=on-failure:5 \
    --privileged \
    --uts host \
    -v /dev:/host/dev \
    -v /var/run:/var/run \
    -v /var/lib/rancher/longhorn:/var/lib/rancher/longhorn \
    --volumes-from ${LONGHORN_ENGINE_BINARY_NAME} \
    ${LONGHORN_MANAGER_IMAGE} \
    launch-manager -d \
    --orchestrator docker \
    --engine-image ${LONGHORN_ENGINE_IMAGE} \
    --etcd-servers http://${etcd_ip}:2379
echo ${LONGHORN_MANAGER_NAME} is ready

# finally longhorn-driver
cleanup ${LONGHORN_DRIVER_NAME}
docker run -d \
    --name ${LONGHORN_DRIVER_NAME} \
    --restart=on-failure:5 \
    --network none \
    --privileged \
    -v /run:/run \
    -v /var/run:/var/run \
    -v /dev:/host/dev \
    -v /var/lib/rancher/volumes:/var/lib/rancher/volumes:shared \
    ${LONGHORN_DRIVER_IMAGE}
echo ${LONGHORN_DRIVER_NAME} is ready

manager_ip=$(get_container_ip ${LONGHORN_MANAGER_NAME})

cleanup ${LONGHORN_UI_NAME}
docker run -d \
    --name ${LONGHORN_UI_NAME} \
    --restart=on-failure:5 \
    ${network_option} \
    -p ${ui_port}:8000/tcp \
    -e LONGHORN_MANAGER_IP=http://${manager_ip}:9500 \
    ${LONGHORN_UI_IMAGE}
echo ${LONGHORN_UI_NAME} is ready

echo
echo Longhorn is up at port ${ui_port}
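For reference, a typical invocation matching the USAGE string above (the etcd IP and port are illustrative):
```
./longhorn-deploy-node.sh -e 172.17.0.2 -p 8080
```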

deploy/longhorn-setup-single-node-env.sh

@@ -1,63 +0,0 @@
#!/bin/bash
set -e

source ./common.sh

USAGE="Usage: $(basename $0) [-p \<ui_port\>]"

while [[ $# -gt 1 ]]
do
    key="$1"
    case $key in
        -p|--ui-port)
            port="$2"
            shift # past argument
            ;;
        -n|--network)
            network="$2"
            shift # past argument
            ;;
        *)
            # unknown option
            echo ${USAGE}
            break
            ;;
    esac
    shift
done

options=
if [ "$port" != "" ]; then
    options="${options} -p $port"
fi

network_option=
if [ "$network" != "" ]; then
    options="${options} -n ${network}"
    network_option="--network ${network}"
fi

ETCD_SERVER=longhorn-etcd-server
ETCD_IMAGE=quay.io/coreos/etcd:v3.1.5

cleanup $ETCD_SERVER

docker run -d \
    --name $ETCD_SERVER \
    --volume /etcd-data \
    ${network_option} \
    $ETCD_IMAGE \
    /usr/local/bin/etcd \
    --name longhorn-etcd-server \
    --data-dir /tmp/etcd-data:/etcd-data \
    --listen-client-urls http://0.0.0.0:2379 \
    --advertise-client-urls http://0.0.0.0:2379

etcd_ip=$(get_container_ip $ETCD_SERVER)
echo etcd server is up at ${etcd_ip}
echo

./longhorn-deploy-node.sh -e ${etcd_ip} ${options}
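For reference, a typical invocation matching the USAGE string above (the port is illustrative):
```
./longhorn-setup-single-node-env.sh -p 8080
```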

examples/example.yaml

@@ -0,0 +1,25 @@
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: voll
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: voll
    flexVolume:
      driver: "rancher.io/longhorn"
      fsType: "ext4"
      options:
        size: "2G"
        numberOfReplicas: "2"
        staleReplicaTimeout: "20"
        fromBackup: ""

examples/example_pv.yaml

@@ -0,0 +1,50 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: longhorn-volv-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  flexVolume:
    driver: "rancher.io/longhorn"
    fsType: "ext4"
    options:
      size: "2G"
      numberOfReplicas: "2"
      staleReplicaTimeout: "20"
      fromBackup: ""
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-volv-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-volv-pvc