Update README.md and yaml for v0.3-rc1

Sheng Yang 2018-08-01 20:38:51 -07:00
parent 4bceb56cec
commit 3757bf9e6c
10 changed files with 457 additions and 465 deletions

README.md

@@ -19,113 +19,116 @@ Longhorn is 100% open source software. Project source code is spread across a number of repos.
[![Longhorn v0.2 Demo](https://asciinema.org/a/172720.png)](https://asciinema.org/a/172720?autoplay=1&loop=1&speed=2)
# Requirements
## Minimal Requirements
1. Docker v1.13+
2. Kubernetes v1.8+
3. Make sure `open-iscsi` has been installed on all nodes of the Kubernetes cluster. For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`.
## Kubernetes Driver Requirements
Longhorn can be used in Kubernetes to provide persistent storage through either the Longhorn Container Storage Interface (CSI) driver or the Longhorn Flexvolume driver. Longhorn automatically deploys one of the drivers, depending on the Kubernetes cluster's setup. The user can also specify the driver in the deployment yaml file. CSI is preferred.
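For example, to pin the driver instead of relying on auto-detection, the deployment yaml ships with a commented-out `--driver` flag in the `longhorn-driver-deployer` section (visible in the yaml later in this commit). A minimal sketch, assuming `longhorn.yaml` has been downloaded locally:
```
# uncomment the driver selection lines ("#- --driver" / "#- flexvolume") to force Flexvolume
sed -i 's|#- --driver|- --driver|; s|#- flexvolume|- flexvolume|' longhorn.yaml
```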
### Requirement for the CSI driver
1. Kubernetes v1.10+
   1. CSI is in beta release for this version of Kubernetes, and enabled by default.
2. Mount Propagation feature gate enabled.
   1. It's enabled by default in Kubernetes v1.10, but some early versions of RKE may not enable it.
3. If the above conditions cannot be met, Longhorn will fall back to the Flexvolume driver.
### Requirement for the Flexvolume driver
1. Kubernetes v1.8+
2. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on every node of the Kubernetes cluster.
3. The user needs to know the volume plugin directory in order to set up the driver correctly.
   1. Rancher RKE: `/var/lib/kubelet/volumeplugins`
   2. Google GKE: `/home/kubernetes/flexvolume`
   3. For other distros, find the correct directory by running `ps aux|grep kubelet` on the host and checking the `--volume-plugin-dir` parameter. If there is none, it would be the default value `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`.
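A quick way to check this on a node (a sketch; the exact output depends on the distro):
```
# print the kubelet's --volume-plugin-dir flag, if set
ps aux | grep kubelet | grep -o '\-\-volume-plugin-dir[^ ]*'
# no output means the default /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ is in use
```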
# Deployment
Creating the Longhorn deployment in your Kubernetes cluster is easy.
If you're using Rancher RKE, or another distro with Kubernetes v1.10+ and Mount Propagation enabled, you can just run:
```
kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/rc/deploy/longhorn.yaml
```
If you're using the Flexvolume driver with another Kubernetes distro, replace the value of `$FLEXVOLUME_DIR` in the following commands with your own Flexvolume directory as specified above.
```
FLEXVOLUME_DIR="/home/kubernetes/flexvolume/"
curl -s https://raw.githubusercontent.com/rancher/longhorn/rc/deploy/longhorn.yaml|sed "s#^\( *\)value: \"/var/lib/kubelet/volumeplugins\"#\1value: \"${FLEXVOLUME_DIR}\"#g" > longhorn.yaml
kubectl create -f longhorn.yaml
```
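As a sanity check before applying, you can confirm the substitution took effect (assuming `FLEXVOLUME_DIR` was set to `/home/kubernetes/flexvolume/` as above):
```
grep 'value: "/home/kubernetes/flexvolume/"' longhorn.yaml
```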
For Google Kubernetes Engine (GKE) users, see [here](#google-kubernetes-engine) before proceeding.
Longhorn Manager and Longhorn Driver will be deployed as daemonsets in a separate namespace called `longhorn-system`, as you can see in the yaml file.
When you see those pods have started correctly as follows, you've deployed Longhorn successfully.
Deployed with the CSI driver:
```
# kubectl -n longhorn-system get pod
NAME READY STATUS RESTARTS AGE
csi-attacher-0 1/1 Running 0 6h
csi-provisioner-0 1/1 Running 0 6h
engine-image-ei-57b85e25-8v65d 1/1 Running 0 7d
engine-image-ei-57b85e25-gjjs6 1/1 Running 0 7d
engine-image-ei-57b85e25-t2787 1/1 Running 0 7d
longhorn-csi-plugin-4cpk2 2/2 Running 0 6h
longhorn-csi-plugin-ll6mq 2/2 Running 0 6h
longhorn-csi-plugin-smlsh 2/2 Running 0 6h
longhorn-driver-deployer-7b5bdcccc8-fbncl 1/1 Running 0 6h
longhorn-manager-7x8x8 1/1 Running 0 6h
longhorn-manager-8kqf4 1/1 Running 0 6h
longhorn-manager-kln4h 1/1 Running 0 6h
longhorn-ui-f849dcd85-cgkgg 1/1 Running 0 5d
```
Or with the Flexvolume driver:
```
# kubectl -n longhorn-system get pod
NAME READY STATUS RESTARTS AGE
engine-image-ei-57b85e25-8v65d 1/1 Running 0 7d
engine-image-ei-57b85e25-gjjs6 1/1 Running 0 7d
engine-image-ei-57b85e25-t2787 1/1 Running 0 7d
longhorn-driver-deployer-5469b87b9c-b9gm7 1/1 Running 0 2h
longhorn-flexvolume-driver-lth5g 1/1 Running 0 2h
longhorn-flexvolume-driver-tpqf7 1/1 Running 0 2h
longhorn-flexvolume-driver-v9mrj 1/1 Running 0 2h
longhorn-manager-7x8x8 1/1 Running 0 9h
longhorn-manager-8kqf4 1/1 Running 0 9h
longhorn-manager-kln4h 1/1 Running 0 9h
longhorn-ui-f849dcd85-cgkgg 1/1 Running 0 5d
```
## Access the UI
Use `kubectl -n longhorn-system get svc` to get the external service IP for UI:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
longhorn-backend ClusterIP 10.20.248.250 <none> 9500/TCP 58m
longhorn-frontend LoadBalancer 10.20.245.110 100.200.200.123 80:30697/TCP 58m
```
If the Kubernetes cluster supports creating LoadBalancers, the user can then use the `EXTERNAL-IP` (`100.200.200.123` in the case above) of `longhorn-frontend` to access the Longhorn UI. Otherwise the user can use `<node_ip>:<port>` (port is `30697` in the case above) to access the UI.
The Longhorn UI connects to the Longhorn Manager API and provides an overview of the system, volume operations, and snapshot/backup operations. It's highly recommended for the user to check out the Longhorn UI.
Notice the current UI is unauthenticated.
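One way to print the UI address directly (a sketch for the LoadBalancer case):
```
# prints the external IP of the longhorn-frontend service
kubectl -n longhorn-system get svc longhorn-frontend \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```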
# Use Longhorn with Kubernetes
Longhorn provides persistent volumes directly to Kubernetes through one of the Longhorn drivers. No matter which driver you're using, you can use a Kubernetes StorageClass to provision your persistent volumes.
Use the following command to create a default Longhorn StorageClass named `longhorn`.
```
kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/rc/deploy/example-storageclass.yaml
```
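You can verify that the StorageClass was created:
```
kubectl get storageclass longhorn
```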
Then the user can create a PVC directly. For example:
```
apiVersion: v1
@@ -163,59 +166,144 @@ spec:
    persistentVolumeClaim:
      claimName: longhorn-volv-pvc
```
More examples are available at `./examples/`
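For instance, if the example above is saved as `volume-test.yaml` (a hypothetical filename), applying it should leave the claim bound:
```
kubectl create -f volume-test.yaml
kubectl get pvc longhorn-volv-pvc   # STATUS should become Bound
```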
# Feature Usage
### Snapshot
A snapshot in Longhorn represents a volume state at a given time, stored in the same location as the volume data on the host's physical disk. Snapshot creation is instant in Longhorn.
The user can revert to any previously taken snapshot using the UI. Since Longhorn is a distributed block storage system, make sure the Longhorn volume is unmounted from the host before reverting to any previous snapshot; otherwise it will confuse the node filesystem and cause corruption.
### Backup
A backup in Longhorn represents a volume state at a given time, stored in the BackupStore, which is outside of the Longhorn system. Backup creation involves copying the data through the network, so it will take time.
A corresponding snapshot is needed for creating a backup, and the user can choose to back up any previously created snapshot.
A BackupStore is an NFS server or S3 compatible server.
A BackupTarget represents a BackupStore in the Longhorn system. The BackupTarget can be set at `Settings/General/BackupTarget`.
If the user is using an S3 compatible server as the BackupTarget, a BackupTargetSecret is needed for the authentication information. The user needs to manually create it as a Kubernetes Secret in the `longhorn-system` namespace. See below for details.
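For example, a BackupTargetSecret matching the Minio test manifest later in this commit could be created like this (a sketch; the keys mirror that manifest, and these are test credentials, not production values):
```
kubectl -n longhorn-system create secret generic minio-secret \
  --from-literal=AWS_ACCESS_KEY_ID=longhorn-test-access-key \
  --from-literal=AWS_SECRET_ACCESS_KEY=longhorn-test-secret-key \
  --from-literal=AWS_ENDPOINTS=http://minio-service.default:9000
```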
#### Setup a testing backupstore
We provide two backupstores for testing purposes, based on an NFS server and a Minio S3 server, in `./deploy/backupstores`.
Use the following command to set up a Minio S3 server as the BackupStore after the `longhorn-system` namespace has been created:
```
kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/rc/deploy/backupstores/minio-backupstore.yaml
```
Now set `Settings/General/BackupTarget` to
```
http://minio-service.default:9000
```
And `Settings/General/BackupTargetSecret` to
```
minio-secret
```
Click the `Backup` tab in the UI; it should report an empty list without any errors.
See [Troubleshooting](#troubleshooting) for details.
### Recurring Snapshot and Backup
Longhorn supports recurring snapshots and backups for volumes. The user only needs to set when to take the snapshot and/or backup and how many snapshots/backups to retain, and Longhorn will automatically create the snapshot/backup at the set time, as long as the volume is attached to a node.
The user can find the settings for recurring snapshots and backups in the `Volume Detail` page.
### Multiple disks support
Longhorn supports using more than one disk on the nodes to store the volume data.
To add a new disk for a node, head to the `Node` tab, select one of the nodes, and click the edit disk icon.
By default, `/var/lib/rancher/longhorn` on the host will be used for storing the volume data.
To add any additional disks, the user needs to (see the sketch after this list):
1. Mount the disk on the host to a certain directory.
2. Add the path of the mounted disk to the disk list of the node.
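For step 1, a minimal sketch on the host (assuming `/dev/sdb` is a fresh, empty disk — a hypothetical device name):
```
mkfs.ext4 /dev/sdb
mkdir -p /mnt/longhorn-disk1
mount /dev/sdb /mnt/longhorn-disk1
# then add /mnt/longhorn-disk1 to the node's disk list in the Longhorn UI
```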
Longhorn will automatically detect the storage information about the disk (e.g. maximum space, available space) and start scheduling to it if it can accommodate the volume. A path mounted by an existing disk won't be allowed.
The user can reserve a certain amount of space on the disk to stop Longhorn from using it. It can be set in the `Space Reserved` field for the disk. This is useful for non-dedicated storage disks on the node.
Nodes and disks can be excluded from future scheduling. Notice any scheduled storage space won't be released automatically if the scheduling was disabled for the node.
There are two global settings that affect the scheduling of the volume as well.
`StorageOverProvisioningPercentage` defines the upper bound of `ScheduledStorage / (MaximumStorage - ReservedStorage)`. The default value is `500` (%). That means we can schedule a total of 750 GiB of Longhorn volumes on a 200 GiB disk with 50 GiB reserved for the root file system, because normally people won't use that much space in the volumes, and we store the volumes as sparse files.
`StorageMinimalAvailablePercentage` defines when a disk cannot be scheduled with more volumes. The default value is `10` (%). The bigger value between `MaximumStorage * StorageMinimalAvailablePercentage / 100` and `MaximumStorage - ReservedStorage` will be used to determine if a disk is running low and cannot be scheduled with more volumes.
Notice that currently there is no guarantee that the space volumes use won't exceed `StorageMinimalAvailablePercentage`, because:
1. Longhorn volumes can be bigger than the specified size, since snapshots contain the old state of the volume.
2. Longhorn does over-provisioning by default.
## Uninstall Longhorn
Longhorn CRD objects have finalizers in them, so the user should delete the volumes and related resources first, giving the manager a chance to clean up after them.
### 1. Clean up volume and related resources
```
kubectl -n longhorn-system delete volumes.longhorn.rancher.io --all
```
Check the result using:
```
kubectl -n longhorn-system get volumes.longhorn.rancher.io
kubectl -n longhorn-system get engines.longhorn.rancher.io
kubectl -n longhorn-system get replicas.longhorn.rancher.io
```
Make sure all reports `No resources found.` before continuing.
### 2. Clean up engine images and nodes
```
kubectl -n longhorn-system delete engineimages.longhorn.rancher.io --all
kubectl -n longhorn-system delete nodes.longhorn.rancher.io --all
```
Check the result using:
```
kubectl -n longhorn-system get engineimages.longhorn.rancher.io
kubectl -n longhorn-system get nodes.longhorn.rancher.io
```
Make sure all reports `No resources found.` before continuing.
### 3. Uninstall Longhorn System
```
kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn/rc/deploy/longhorn.yaml
```
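Since the deployment yaml includes the `longhorn-system` namespace, deleting it removes the namespace and everything left in it; a way to confirm the cleanup:
```
kubectl get namespace longhorn-system   # should eventually report NotFound
```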
## Notes
### Google Kubernetes Engine
The configuration yaml will be slightly different for Google Kubernetes Engine (GKE):
1. GKE requires the user to manually claim himself as cluster admin to enable RBAC. The user needs to execute the following command before creating the Longhorn system using yaml files.
```
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<name@example.com>
```
In which `name@example.com` is the user's account name in GCE, and it's case sensitive. See [here](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control) for details.
2. The default Flexvolume plugin directory is different on GKE 1.8+, which is at `/home/kubernetes/flexvolume`. The user needs to use the following commands instead:
```
FLEXVOLUME_DIR="/home/kubernetes/flexvolume/"
curl -s https://raw.githubusercontent.com/rancher/longhorn/rc/deploy/longhorn.yaml|sed "s#^\( *\)value: \"/var/lib/kubelet/volumeplugins\"#\1value: \"${FLEXVOLUME_DIR}\"#g" > longhorn.yaml
kubectl create -f longhorn.yaml
```
See [Troubleshooting](#troubleshooting) for details.
## Troubleshooting
### Volume can be attached/detached from UI, but Kubernetes Pod/StatefulSet etc cannot use it
@@ -226,19 +314,14 @@
By default, Kubernetes uses `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` as the Flexvolume plugin directory.
But some vendors may choose to change the directory for various reasons. For example, GKE uses `/home/kubernetes/flexvolume`, and RKE uses `/var/lib/kubelet/volumeplugins`.
The user can find the correct directory by running `ps aux|grep kubelet` on the host and checking the `--volume-plugin-dir` parameter. If there is none, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used.
## License
Copyright (c) 2014-2018 [Rancher Labs, Inc.](http://rancher.com/)
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

deploy/backupstores/minio-backupstore.yaml

@@ -0,0 +1,65 @@
apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
type: Opaque
data:
  AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5 # longhorn-test-access-key
  AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5 # longhorn-test-secret-key
  AWS_ENDPOINTS: aHR0cDovL21pbmlvLXNlcnZpY2UuZGVmYXVsdDo5MDAw # http://minio-service.default:9000
---
# same secret for longhorn-system namespace
apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
  namespace: longhorn-system
type: Opaque
data:
  AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5 # longhorn-test-access-key
  AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5 # longhorn-test-secret-key
  AWS_ENDPOINTS: aHR0cDovL21pbmlvLXNlcnZpY2UuZGVmYXVsdDo5MDAw # http://minio-service.default:9000
---
apiVersion: v1
kind: Pod
metadata:
  name: longhorn-test-minio
  labels:
    app: longhorn-test-minio
spec:
  volumes:
  - name: minio-volume
    emptyDir: {}
  containers:
  - name: minio
    image: minio/minio
    command: ["sh", "-c", "mkdir -p /storage/backupbucket && exec /usr/bin/minio server /storage"]
    env:
    - name: MINIO_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          name: minio-secret
          key: AWS_ACCESS_KEY_ID
    - name: MINIO_SECRET_KEY
      valueFrom:
        secretKeyRef:
          name: minio-secret
          key: AWS_SECRET_ACCESS_KEY
    ports:
    - containerPort: 9000
    volumeMounts:
    - name: minio-volume
      mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
spec:
  selector:
    app: longhorn-test-minio
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
  sessionAffinity: ClientIP
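After applying this manifest, a quick way to confirm the test server came up (names taken from the manifest above):
```
kubectl get pod longhorn-test-minio   # should reach Running
kubectl get svc minio-service
```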

deploy/backupstores/nfs-backupstore.yaml

@@ -5,6 +5,9 @@ metadata:
  labels:
    app: longhorn-test-nfs
spec:
  volumes:
  - name: nfs-volume
    emptyDir: {}
  containers:
  - name: longhorn-test-nfs-container
    image: janeczku/nfs-ganesha:latest
@@ -16,10 +19,19 @@ spec:
      value: /opt/backupstore
    - name: PSEUDO_PATH
      value: /opt/backupstore
    command: ["bash", "-c", "chmod 700 /opt/backupstore && /opt/start_nfs.sh | tee /var/log/ganesha.log"]
    securityContext:
      privileged: true
      capabilities:
        add: ["SYS_ADMIN", "DAC_READ_SEARCH"]
    volumeMounts:
    - name: nfs-volume
      mountPath: "/opt/backupstore"
    livenessProbe:
      exec:
        command: ["bash", "-c", "grep \"No export entries found\" /var/log/ganesha.log > /dev/null 2>&1 ; [ $? -ne 0 ]"]
      initialDelaySeconds: 5
      periodSeconds: 5
---
kind: Service
apiVersion: v1

deploy/longhorn-gke.yaml

@@ -1,302 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
  name: longhorn-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: longhorn-service-account
  namespace: longhorn-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: longhorn-role
rules:
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - "*"
- apiGroups: [""]
  resources: ["pods", "events", "persistentvolumes", "persistentvolumeclaims", "nodes", "proxy/nodes"]
  verbs: ["*"]
- apiGroups: ["extensions"]
  resources: ["daemonsets"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["*"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["nodes"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["volumes"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["engines"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["replicas"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["settings"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: longhorn-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: longhorn-role
subjects:
- kind: ServiceAccount
  name: longhorn-service-account
  namespace: longhorn-system
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    longhorn-manager: Engine
  name: engines.longhorn.rancher.io
spec:
  group: longhorn.rancher.io
  names:
    kind: Engine
    listKind: EngineList
    plural: engines
    shortNames:
    - lhe
    singular: engine
  scope: Namespaced
  version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    longhorn-manager: Replica
  name: replicas.longhorn.rancher.io
spec:
  group: longhorn.rancher.io
  names:
    kind: Replica
    listKind: ReplicaList
    plural: replicas
    shortNames:
    - lhr
    singular: replica
  scope: Namespaced
  version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    longhorn-manager: Setting
  name: settings.longhorn.rancher.io
spec:
  group: longhorn.rancher.io
  names:
    kind: Setting
    listKind: SettingList
    plural: settings
    shortNames:
    - lhs
    singular: setting
  scope: Namespaced
  version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    longhorn-manager: Volume
  name: volumes.longhorn.rancher.io
spec:
  group: longhorn.rancher.io
  names:
    kind: Volume
    listKind: VolumeList
    plural: volumes
    shortNames:
    - lhv
    singular: volume
  scope: Namespaced
  version: v1alpha1
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: longhorn-manager
  name: longhorn-manager
  namespace: longhorn-system
spec:
  template:
    metadata:
      labels:
        app: longhorn-manager
    spec:
      initContainers:
      - name: init-container
        image: rancher/longhorn-engine:de88734
        command: ['sh', '-c', 'cp /usr/local/bin/* /data/']
        volumeMounts:
        - name: execbin
          mountPath: /data/
      containers:
      - name: longhorn-manager
        image: rancher/longhorn-manager:010fe60
        imagePullPolicy: Always
        securityContext:
          privileged: true
        command:
        - longhorn-manager
        - -d
        - daemon
        - --engine-image
        - rancher/longhorn-engine:de88734
        - --manager-image
        - rancher/longhorn-manager:010fe60
        - --service-account
        - longhorn-service-account
        ports:
        - containerPort: 9500
        volumeMounts:
        - name: dev
          mountPath: /host/dev/
        - name: proc
          mountPath: /host/proc/
        - name: varrun
          mountPath: /var/run/
        - name: longhorn
          mountPath: /var/lib/rancher/longhorn/
        - name: execbin
          mountPath: /usr/local/bin/
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      volumes:
      - name: dev
        hostPath:
          path: /dev/
      - name: proc
        hostPath:
          path: /proc/
      - name: varrun
        hostPath:
          path: /var/run/
      - name: longhorn
        hostPath:
          path: /var/lib/rancher/longhorn/
      - name: execbin
        emptyDir: {}
      serviceAccountName: longhorn-service-account
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: longhorn-manager
  name: longhorn-backend
  namespace: longhorn-system
spec:
  selector:
    app: longhorn-manager
  ports:
  - port: 9500
    targetPort: 9500
  sessionAffinity: ClientIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: longhorn-ui
  name: longhorn-ui
  namespace: longhorn-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: longhorn-ui
    spec:
      containers:
      - name: longhorn-ui
        image: rancher/longhorn-ui:1455f4f
        ports:
        - containerPort: 8000
        env:
        - name: LONGHORN_MANAGER_IP
          value: "http://longhorn-backend:9500"
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: longhorn-ui
  name: longhorn-frontend
  namespace: longhorn-system
spec:
  selector:
    app: longhorn-ui
  ports:
  - port: 80
    targetPort: 8000
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: longhorn-flexvolume-driver-deployer
  namespace: longhorn-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: longhorn-flexvolume-driver-deployer
    spec:
      containers:
      - name: longhorn-flexvolume-driver-deployer
        image: rancher/longhorn-manager:010fe60
        imagePullPolicy: Always
        command:
        - longhorn-manager
        - -d
        - deploy-flexvolume-driver
        - --manager-image
        - rancher/longhorn-manager:010fe60
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: FLEXVOLUME_DIR
          value: "/home/kubernetes/flexvolume/"
      serviceAccountName: longhorn-service-account
---

deploy/longhorn.yaml

@@ -21,31 +21,22 @@ rules:
  verbs:
  - "*"
- apiGroups: [""]
  resources: ["pods", "events", "persistentvolumes", "persistentvolumeclaims", "nodes", "proxy/nodes", "pods/log", "secrets", "services"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list"]
- apiGroups: ["apps"]
  resources: ["daemonsets", "statefulsets"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["*"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses", "volumeattachments"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["volumes", "engines", "replicas", "settings", "engineimages", "nodes"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
@@ -133,7 +124,43 @@ spec:
  scope: Namespaced
  version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    longhorn-manager: EngineImage
  name: engineimages.longhorn.rancher.io
spec:
  group: longhorn.rancher.io
  names:
    kind: EngineImage
    listKind: EngineImageList
    plural: engineimages
    shortNames:
    - lhei
    singular: engineimage
  scope: Namespaced
  version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    longhorn-manager: Node
  name: nodes.longhorn.rancher.io
spec:
  group: longhorn.rancher.io
  names:
    kind: Node
    listKind: NodeList
    plural: nodes
    shortNames:
    - lhn
    singular: node
  scope: Namespaced
  version: v1alpha1
---
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  labels:
@@ -141,21 +168,17 @@ metadata:
  name: longhorn-manager
  namespace: longhorn-system
spec:
  selector:
    matchLabels:
      app: longhorn-manager
  template:
    metadata:
      labels:
        app: longhorn-manager
    spec:
      containers:
      - name: longhorn-manager
        image: rancher/longhorn-manager:06a81b9
        imagePullPolicy: Always
        securityContext:
          privileged: true
@@ -164,9 +187,9 @@ spec:
        - -d
        - daemon
        - --engine-image
        - rancher/longhorn-engine:31c42f0
        - --manager-image
        - rancher/longhorn-manager:06a81b9
        - --service-account
        - longhorn-service-account
        ports:
@@ -180,8 +203,7 @@ spec:
          mountPath: /var/run/
        - name: longhorn
          mountPath: /var/lib/rancher/longhorn/
          mountPropagation: Bidirectional
        env:
        - name: POD_NAMESPACE
          valueFrom:
@@ -208,8 +230,6 @@ spec:
      - name: longhorn
        hostPath:
          path: /var/lib/rancher/longhorn/
      serviceAccountName: longhorn-service-account
---
kind: Service
@@ -227,7 +247,7 @@ spec:
    targetPort: 9500
  sessionAffinity: ClientIP
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
@@ -236,6 +256,9 @@ metadata:
  namespace: longhorn-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: longhorn-ui
  template:
    metadata:
      labels:
@@ -243,7 +266,7 @@ spec:
    spec:
      containers:
      - name: longhorn-ui
        image: rancher/longhorn-ui:47e0b2a
        ports:
        - containerPort: 8000
        env:
@@ -265,28 +288,40 @@ spec:
    targetPort: 8000
  type: LoadBalancer
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: longhorn-driver-deployer
  namespace: longhorn-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: longhorn-driver-deployer
  template:
    metadata:
      labels:
        app: longhorn-driver-deployer
    spec:
      initContainers:
      - name: wait-longhorn-manager
        image: rancher/longhorn-manager:06a81b9
        command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done']
      containers:
      - name: longhorn-driver-deployer
        image: rancher/longhorn-manager:06a81b9
        imagePullPolicy: Always
        command:
        - longhorn-manager
        - -d
        - deploy-driver
        - --manager-image
        - rancher/longhorn-manager:06a81b9
        - --manager-url
        - http://longhorn-backend:9500/v1
        # manually choose "flexvolume" or "csi"
        #- --driver
        #- flexvolume
        env:
        - name: POD_NAMESPACE
          valueFrom:
@@ -296,9 +331,17 @@ spec:
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: FLEXVOLUME_DIR
          value: "/var/lib/kubelet/volumeplugins"
          # FOR RKE
          #value: "/var/lib/kubelet/volumeplugins"
          # FOR GKE
          #value: "/home/kubernetes/flexvolume/"
          # For default or auto detection with Kubernetes <= v1.8
          #value: ""
      serviceAccountName: longhorn-service-account
---

@@ -0,0 +1,50 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: longhorn-vol-pv
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: io.rancher.longhorn
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: '3'
      staleReplicaTimeout: '30'
    volumeHandle: existing-longhorn-volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-vol-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  volumeName: longhorn-vol-pv
---
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: vol
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: longhorn-vol-pvc
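The `volumeHandle` above (`existing-longhorn-volume`) names the Longhorn volume this PV binds to, so that volume should already exist in Longhorn. A sketch of applying it, with `example-pv.yaml` as a hypothetical filename for the manifest above:
```
kubectl create -f example-pv.yaml
kubectl get pv longhorn-vol-pv     # the statically defined PV
kubectl get pvc longhorn-vol-pvc   # should bind to longhorn-vol-pv
```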

deploy/example-storageclass.yaml

@@ -0,0 +1,41 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn
provisioner: rancher.io/longhorn
parameters:
  numberOfReplicas: '3'
  staleReplicaTimeout: '30'
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-vol-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: longhorn
---
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: vol
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: longhorn-vol-pvc