Merge pull request #257 from rancher/v0.3-rc

Merge v0.3-rc branch
This commit is contained in:
Sheng Yang 2018-08-23 14:26:25 -07:00 committed by GitHub
commit dbde8d78b4
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
20 changed files with 1594 additions and 479 deletions

README.md

@ -8,125 +8,185 @@ You can read more details of Longhorn and its design [here](http://rancher.com/m
Longhorn is a work in progress. We appreciate your comments as we continue to work on it!
## Source code
Longhorn is 100% open source software. Project source code is spread across a number of repos:
1. Longhorn engine -- Core controller/replica logic https://github.com/rancher/longhorn-engine
1. Longhorn manager -- Longhorn orchestration, includes Flexvolume driver for Kubernetes https://github.com/rancher/longhorn-manager
1. Longhorn UI -- Dashboard https://github.com/rancher/longhorn-ui
# Demo
[![Longhorn v0.2 Demo](https://asciinema.org/a/172720.png)](https://asciinema.org/a/172720?autoplay=1&loop=1&speed=2)
# Requirements
## Minimal Requirements
1. Docker v1.13+
2. Kubernetes v1.8+
3. Make sure `open-iscsi` has been installed on all nodes of the Kubernetes cluster (see the install example below). For GKE, Ubuntu is the recommended guest OS image since it already contains `open-iscsi`.
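For example, on Ubuntu or Debian based nodes (an illustrative sketch; the package name may differ on other distros), open-iscsi can be installed with:
```
sudo apt-get update
sudo apt-get install -y open-iscsi
```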
## Kubernetes driver Requirements
Longhorn can be used in Kubernetes to provide persistent storage through either the Longhorn Container Storage Interface (CSI) driver or the Longhorn FlexVolume driver. Longhorn will automatically deploy one of the drivers, depending on the Kubernetes cluster configuration. The user can also specify the driver in the deployment yaml file (see the snippet below); CSI is preferred.
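If you want to pin a specific driver instead of relying on auto-detection, the driver deployer's `--driver` flag can be uncommented in the deployment yaml shipped with this release (it appears commented out later in this changeset):
```
        # manually choose "flexvolume" or "csi"
        #- --driver
        #- flexvolume
```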
### Environment check script
We've written a script to help users gather the information needed to configure the setup correctly.
Before installing, run:
```
curl -sSfL https://raw.githubusercontent.com/rancher/longhorn/v0.3-rc/scripts/environment_check.sh | bash
```
Example result:
```
pod "detect-flexvol-dir" created
daemonset.apps "longhorn-environment-check" created
waiting for pod/detect-flexvol-dir to finish
pod/detect-flexvol-dir completed
all pods ready (3/3)
FLEXVOLUME_DIR="/home/kubernetes/flexvolume"
MountPropagation is enabled!
cleaning up detection workloads...
pod "detect-flexvol-dir" deleted
daemonset.apps "longhorn-environment-check" deleted
clean up completed
```
Please make a note of the Flexvolume directory and the MountPropagation state reported above.
### Requirement for the CSI driver
1. Kubernetes v1.10+
1. CSI is in beta release for this version of Kubernetes, and enabled by default.
2. Mount propagation feature gate enabled.
1. It's enabled by default in Kubernetes v1.10. But some early versions of RKE may not enable it.
3. If the above conditions cannot be met, Longhorn will fall back to the FlexVolume driver.
### Check if your setup satisfies the CSI requirements
1. Use the following command to check your Kubernetes server version
```
kubectl version
```
Result:
```
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:14:26Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
```
The `Server Version` should be `v1.10` or above.
2. The result of environment check script should contain `MountPropagation is enabled!`.
### Requirement for the Flexvolume driver
1. Kubernetes v1.8+
2. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on every node of the Kubernetes cluster.
3. The user needs to know the volume plugin directory in order to set up the driver correctly.
1. The correct directory should be reported by the environment check script.
2. Rancher RKE: `/var/lib/kubelet/volumeplugins`
3. Google GKE: `/home/kubernetes/flexvolume`
4. For any other distro, use the value reported by the environment check script (you can also check the kubelet flag directly, as shown below).
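If you want to double-check the directory on a node itself (a quick sketch, assuming shell access to the node), inspect the kubelet command line:
```
ps aux | grep kubelet
# look for the --volume-plugin-dir parameter; if it's absent, the default
# /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ is in use
```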
# Upgrading
For instructions on how to upgrade Longhorn App v0.1 or v0.2 to v0.3, [see this document](docs/upgrade.md#upgrade).
# Deployment
Creating the Longhorn deployment in your Kubernetes cluster is straightforward.
If CSI is supported (as stated above), you can just run:
```
kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/v0.3-rc/deploy/longhorn.yaml
```
If you're using the Flexvolume driver with a Kubernetes distro other than RKE, replace the value of `$FLEXVOLUME_DIR` in the following command with your own Flexvolume directory as identified above.
```
FLEXVOLUME_DIR=<FLEXVOLUME_DIR>
```
Then run
```
curl -s https://raw.githubusercontent.com/rancher/longhorn/v0.3-rc/deploy/longhorn.yaml|sed "s#^\( *\)value: \"/var/lib/kubelet/volumeplugins\"#\1value: \"${FLEXVOLUME_DIR}\"#g" > longhorn.yaml
kubectl apply -f longhorn.yaml
```
For Google Kubernetes Engine (GKE) users, see [here](#google-kubernetes-engine) before proceeding.
Longhorn manager and Longhorn driver will be deployed as daemonsets in a separate namespace called `longhorn-system`, as you can see in the yaml file.
When you see that those pods have started correctly, as follows, you've deployed Longhorn successfully.
Deployed with CSI driver:
```
# kubectl -n longhorn-system get pod
NAME READY STATUS RESTARTS AGE
csi-attacher-0 1/1 Running 0 6h
csi-provisioner-0 1/1 Running 0 6h
engine-image-ei-57b85e25-8v65d 1/1 Running 0 7d
engine-image-ei-57b85e25-gjjs6 1/1 Running 0 7d
engine-image-ei-57b85e25-t2787 1/1 Running 0 7d
longhorn-csi-plugin-4cpk2 2/2 Running 0 6h
longhorn-csi-plugin-ll6mq 2/2 Running 0 6h
longhorn-csi-plugin-smlsh 2/2 Running 0 6h
longhorn-driver-deployer-7b5bdcccc8-fbncl 1/1 Running 0 6h
longhorn-manager-7x8x8 1/1 Running 0 6h
longhorn-manager-8kqf4 1/1 Running 0 6h
longhorn-manager-kln4h 1/1 Running 0 6h
longhorn-ui-f849dcd85-cgkgg 1/1 Running 0 5d
```
Or with Flexvolume driver
```
# kubectl -n longhorn-system get pod
NAME READY STATUS RESTARTS AGE
engine-image-ei-57b85e25-8v65d 1/1 Running 0 7d
engine-image-ei-57b85e25-gjjs6 1/1 Running 0 7d
engine-image-ei-57b85e25-t2787 1/1 Running 0 7d
longhorn-driver-deployer-5469b87b9c-b9gm7 1/1 Running 0 2h
longhorn-flexvolume-driver-lth5g 1/1 Running 0 2h
longhorn-flexvolume-driver-tpqf7 1/1 Running 0 2h
longhorn-flexvolume-driver-v9mrj 1/1 Running 0 2h
longhorn-manager-7x8x8 1/1 Running 0 9h
longhorn-manager-8kqf4 1/1 Running 0 9h
longhorn-manager-kln4h 1/1 Running 0 9h
longhorn-ui-f849dcd85-cgkgg 1/1 Running 0 5d
```
## Access the UI
Use `kubectl -n longhorn-system get svc` to get the external service IP for the UI:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
longhorn-backend ClusterIP 10.20.248.250 <none> 9500/TCP 58m
longhorn-frontend LoadBalancer 10.20.245.110 100.200.200.123 80:30697/TCP 58m
```
If the Kubernetes cluster supports creating a LoadBalancer, the user can then use the `EXTERNAL-IP` (`100.200.200.123` in the case above) of `longhorn-frontend` to access the Longhorn UI. Otherwise the user can use `<node_ip>:<port>` (port is `30697` in the case above) to access the UI; a quick way to list node IPs is shown below.
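If you are using the NodePort, the node IPs can be listed with a standard kubectl query, for example:
```
kubectl get nodes -o wide
```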
The Longhorn UI connects to the Longhorn manager API and provides an overview of the system, the volume operations, and the snapshot/backup operations. It's highly recommended to check out the Longhorn UI.
Note that the current UI is unauthenticated.
# Use Longhorn with Kubernetes
Longhorn provides persistent volumes to Kubernetes through one of the Longhorn drivers. No matter which driver you're using, you can use a Kubernetes StorageClass to provision your persistent volumes.
Use the following command to create a default Longhorn StorageClass named `longhorn`.
```
kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/v0.3-rc/examples/storageclass.yaml
```
Now you can create a pod using Longhorn like this:
```
kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/v0.3-rc/examples/pvc.yaml
```
The yaml contains two parts:
1. Create a PVC using the Longhorn StorageClass.
``` ```
apiVersion: v1
kind: PersistentVolumeClaim
@ -141,7 +201,7 @@ spec:
storage: 2Gi
```
2. Use it in a Pod as a persistent volume:
```
apiVersion: v1
kind: Pod
@ -163,82 +223,135 @@ spec:
persistentVolumeClaim:
claimName: longhorn-volv-pvc
```
More examples are available at `./examples/`
# Highlight features
### Snapshot
A snapshot in Longhorn represents a volume state at a given time, stored in the same location as the volume data on the physical disk of the host. Snapshot creation is instant in Longhorn.
The user can revert to any previously taken snapshot using the UI. Since Longhorn is a distributed block storage system, please make sure the Longhorn volume is unmounted from the host when reverting to any previous snapshot, otherwise it will confuse the node filesystem and cause filesystem corruption.
#### Note about the block level snapshot
Longhorn is a `crash-consistent` block storage solution.
It's normal for the OS to keep content in the cache before writing it into the block layer. However, it also means that if all the replicas are down, Longhorn may not contain the changes made immediately before the shutdown, since the content was kept in the OS-level cache and hadn't been transferred to the Longhorn system yet. It's similar to a desktop going down due to a power outage: after the power is restored, you may find some corrupted files on the hard drive.
To force the data to be written to the block layer at any given moment, the user can run the `sync` command on the node manually, or unmount the disk. The OS will write the content from the cache to the block layer in either situation, for example:
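A minimal illustration on the node where the volume is attached (the mount point here is only a placeholder):
```
sync                        # flush the OS cache to the block layer
# or
umount /mnt/longhorn-vol    # unmounting also flushes the cache
```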
### Backup
A backup in Longhorn represents a volume state at a given time, stored in secondary storage (the backupstore, in Longhorn terms), which is outside of the Longhorn system. Backup creation involves copying the data through the network, so it will take time.
A corresponding snapshot is needed for creating a backup, and the user can choose to back up any previously created snapshot.
A backupstore is an NFS server or an S3-compatible server.
A backup target represents a backupstore in Longhorn. The backup target can be set at `Settings/General/BackupTarget`.
If the user is using an S3-compatible server as the backup target, a backup target secret is needed for the authentication information. The user needs to manually create it as a Kubernetes Secret in the `longhorn-system` namespace. See below for details.
#### Setup a testing backupstore
We provide two testing-purpose backupstores, based on an NFS server and a Minio S3 server, in `./deploy/backupstores`.
Use the following command to set up a Minio S3 server as the backupstore after `longhorn-system` has been created.
``` ```
kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/v0.3-rc/deploy/backupstores/minio-backupstore.yaml
```
Now set `Settings/General/BackupTarget` to
```
s3://backupbucket@us-east-1/backupstore
```
And `Settings/General/BackupTargetSecret` to
```
minio-secret
```
Click the `Backup` tab in the UI; it should report an empty list without erroring out.
The `minio-secret` yaml looks like this:
```
apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
  namespace: longhorn-system
type: Opaque
data:
  AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5 # longhorn-test-access-key
  AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5 # longhorn-test-secret-key
  AWS_ENDPOINTS: aHR0cDovL21pbmlvLXNlcnZpY2UuZGVmYXVsdDo5MDAw # http://minio-service.default:9000
```
Note that the secret must be created in the `longhorn-system` namespace for Longhorn to access it.
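Instead of applying a yaml file, the same secret can also be created directly with kubectl (a sketch using the test credentials above; substitute your own values):
```
kubectl -n longhorn-system create secret generic minio-secret \
    --from-literal=AWS_ACCESS_KEY_ID=longhorn-test-access-key \
    --from-literal=AWS_SECRET_ACCESS_KEY=longhorn-test-secret-key \
    --from-literal=AWS_ENDPOINTS=http://minio-service.default:9000
```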
### Recurring snapshot and backup
Longhorn supports recurring snapshots and backups for volumes. The user only needs to set when to take the snapshot and/or backup and how many snapshots/backups to retain, and Longhorn will automatically create the snapshot/backup at that time, as long as the volume is attached to a node.
Users can find the settings for recurring snapshots and backups on the `Volume Detail` page.
## Other features
### [Multiple disks](./docs/multidisk.md)
### [iSCSI](./docs/iscsi.md)
### [Restoring Stateful Set volumes](./docs/restore_statefulset.md)
### [Base image](./docs/base-image.md)
## Additional informations
### [Google Kubernetes Engine](./docs/gke.md)
### [Upgrade from v0.1/v0.2](./docs/upgrade.md)
### [Troubleshooting](./docs/troubleshooting.md)
## Uninstall Longhorn
Longhorn stores its data in the Kubernetes API server in the form of CRDs. The Longhorn CRDs have finalizers, so the user should delete the volumes and related resources first, giving the managers a chance to clean up after them.
### 1. Clean up volume and related resources
Note that you will lose all your data after doing this. It's recommended to make backups before proceeding if you intend to keep the data.
```
kubectl -n longhorn-system delete volumes.longhorn.rancher.io --all
```
Check the result using:
```
kubectl -n longhorn-system get volumes.longhorn.rancher.io
kubectl -n longhorn-system get engines.longhorn.rancher.io
kubectl -n longhorn-system get replicas.longhorn.rancher.io
```
Make sure all of them report `No resources found.` before continuing.
### 2. Clean up engine images and nodes
```
kubectl -n longhorn-system delete engineimages.longhorn.rancher.io --all
kubectl -n longhorn-system delete nodes.longhorn.rancher.io --all
```
Check the result using:
```
kubectl -n longhorn-system get engineimages.longhorn.rancher.io
kubectl -n longhorn-system get nodes.longhorn.rancher.io
```
Make sure all of them report `No resources found.` before continuing.
### 3. Uninstall Longhorn
```
kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn/v0.3-rc/deploy/longhorn.yaml
```
## License
Copyright (c) 2014-2018 [Rancher Labs, Inc.](http://rancher.com/)
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


@ -0,0 +1,65 @@
apiVersion: v1
kind: Secret
metadata:
name: minio-secret
type: Opaque
data:
AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5 # longhorn-test-access-key
AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5 # longhorn-test-secret-key
AWS_ENDPOINTS: aHR0cDovL21pbmlvLXNlcnZpY2UuZGVmYXVsdDo5MDAw # http://minio-service.default:9000
---
# same secret for longhorn-system namespace
apiVersion: v1
kind: Secret
metadata:
name: minio-secret
namespace: longhorn-system
type: Opaque
data:
AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5 # longhorn-test-access-key
AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5 # longhorn-test-secret-key
AWS_ENDPOINTS: aHR0cDovL21pbmlvLXNlcnZpY2UuZGVmYXVsdDo5MDAw # http://minio-service.default:9000
---
apiVersion: v1
kind: Pod
metadata:
name: longhorn-test-minio
labels:
app: longhorn-test-minio
spec:
volumes:
- name: minio-volume
emptyDir: {}
containers:
- name: minio
image: minio/minio
command: ["sh", "-c", "mkdir -p /storage/backupbucket && exec /usr/bin/minio server /storage"]
env:
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: AWS_ACCESS_KEY_ID
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: minio-secret
key: AWS_SECRET_ACCESS_KEY
ports:
- containerPort: 9000
volumeMounts:
- name: minio-volume
mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
name: minio-service
spec:
selector:
app: longhorn-test-minio
ports:
- port: 9000
targetPort: 9000
protocol: TCP
sessionAffinity: ClientIP


@ -5,6 +5,9 @@ metadata:
labels:
app: longhorn-test-nfs
spec:
volumes:
- name: nfs-volume
emptyDir: {}
containers:
- name: longhorn-test-nfs-container
image: janeczku/nfs-ganesha:latest
@ -16,10 +19,19 @@ spec:
value: /opt/backupstore
- name: PSEUDO_PATH
value: /opt/backupstore
command: ["bash", "-c", "chmod 700 /opt/backupstore && /opt/start_nfs.sh | tee /var/log/ganesha.log"]
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN", "DAC_READ_SEARCH"]
volumeMounts:
- name: nfs-volume
mountPath: "/opt/backupstore"
livenessProbe:
exec:
command: ["bash", "-c", "grep \"No export entries found\" /var/log/ganesha.log > /dev/null 2>&1 ; [ $? -ne 0 ]"]
initialDelaySeconds: 5
periodSeconds: 5
---
kind: Service
apiVersion: v1


@ -1,302 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: longhorn-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: longhorn-service-account
namespace: longhorn-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: longhorn-role
rules:
- apiGroups:
- apiextensions.k8s.io
resources:
- customresourcedefinitions
verbs:
- "*"
- apiGroups: [""]
resources: ["pods", "events", "persistentvolumes", "persistentvolumeclaims", "nodes", "proxy/nodes"]
verbs: ["*"]
- apiGroups: ["extensions"]
resources: ["daemonsets"]
verbs: ["*"]
- apiGroups: ["batch"]
resources: ["jobs", "cronjobs"]
verbs: ["*"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
resources: ["nodes"]
verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
resources: ["volumes"]
verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
resources: ["engines"]
verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
resources: ["replicas"]
verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
resources: ["settings"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: longhorn-bind
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: longhorn-role
subjects:
- kind: ServiceAccount
name: longhorn-service-account
namespace: longhorn-system
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Engine
name: engines.longhorn.rancher.io
spec:
group: longhorn.rancher.io
names:
kind: Engine
listKind: EngineList
plural: engines
shortNames:
- lhe
singular: engine
scope: Namespaced
version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Replica
name: replicas.longhorn.rancher.io
spec:
group: longhorn.rancher.io
names:
kind: Replica
listKind: ReplicaList
plural: replicas
shortNames:
- lhr
singular: replica
scope: Namespaced
version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Setting
name: settings.longhorn.rancher.io
spec:
group: longhorn.rancher.io
names:
kind: Setting
listKind: SettingList
plural: settings
shortNames:
- lhs
singular: setting
scope: Namespaced
version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Volume
name: volumes.longhorn.rancher.io
spec:
group: longhorn.rancher.io
names:
kind: Volume
listKind: VolumeList
plural: volumes
shortNames:
- lhv
singular: volume
scope: Namespaced
version: v1alpha1
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
labels:
app: longhorn-manager
name: longhorn-manager
namespace: longhorn-system
spec:
template:
metadata:
labels:
app: longhorn-manager
spec:
initContainers:
- name: init-container
image: rancher/longhorn-engine:de88734
command: ['sh', '-c', 'cp /usr/local/bin/* /data/']
volumeMounts:
- name: execbin
mountPath: /data/
containers:
- name: longhorn-manager
image: rancher/longhorn-manager:010fe60
imagePullPolicy: Always
securityContext:
privileged: true
command:
- longhorn-manager
- -d
- daemon
- --engine-image
- rancher/longhorn-engine:de88734
- --manager-image
- rancher/longhorn-manager:010fe60
- --service-account
- longhorn-service-account
ports:
- containerPort: 9500
volumeMounts:
- name: dev
mountPath: /host/dev/
- name: proc
mountPath: /host/proc/
- name: varrun
mountPath: /var/run/
- name: longhorn
mountPath: /var/lib/rancher/longhorn/
- name: execbin
mountPath: /usr/local/bin/
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumes:
- name: dev
hostPath:
path: /dev/
- name: proc
hostPath:
path: /proc/
- name: varrun
hostPath:
path: /var/run/
- name: longhorn
hostPath:
path: /var/lib/rancher/longhorn/
- name: execbin
emptyDir: {}
serviceAccountName: longhorn-service-account
---
kind: Service
apiVersion: v1
metadata:
labels:
app: longhorn-manager
name: longhorn-backend
namespace: longhorn-system
spec:
selector:
app: longhorn-manager
ports:
- port: 9500
targetPort: 9500
sessionAffinity: ClientIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: longhorn-ui
name: longhorn-ui
namespace: longhorn-system
spec:
replicas: 1
template:
metadata:
labels:
app: longhorn-ui
spec:
containers:
- name: longhorn-ui
image: rancher/longhorn-ui:1455f4f
ports:
- containerPort: 8000
env:
- name: LONGHORN_MANAGER_IP
value: "http://longhorn-backend:9500"
---
kind: Service
apiVersion: v1
metadata:
labels:
app: longhorn-ui
name: longhorn-frontend
namespace: longhorn-system
spec:
selector:
app: longhorn-ui
ports:
- port: 80
targetPort: 8000
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: longhorn-flexvolume-driver-deployer
namespace: longhorn-system
spec:
replicas: 1
template:
metadata:
labels:
app: longhorn-flexvolume-driver-deployer
spec:
containers:
- name: longhorn-flexvolume-driver-deployer
image: rancher/longhorn-manager:010fe60
imagePullPolicy: Always
command:
- longhorn-manager
- -d
- deploy-flexvolume-driver
- --manager-image
- rancher/longhorn-manager:010fe60
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: FLEXVOLUME_DIR
value: "/home/kubernetes/flexvolume/"
serviceAccountName: longhorn-service-account
---


@ -21,31 +21,22 @@ rules:
verbs:
- "*"
- apiGroups: [""]
resources: ["pods", "events", "persistentvolumes", "persistentvolumeclaims", "nodes", "proxy/nodes", "pods/log", "secrets", "services"]
verbs: ["*"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list"]
- apiGroups: ["apps"]
resources: ["daemonsets", "statefulsets"]
verbs: ["*"]
- apiGroups: ["batch"]
resources: ["jobs", "cronjobs"]
verbs: ["*"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses", "volumeattachments"]
verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
resources: ["volumes", "engines", "replicas", "settings", "engineimages", "nodes"]
verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
resources: ["volumes"]
verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
resources: ["engines"]
verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
resources: ["replicas"]
verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
resources: ["settings"]
verbs: ["*"] verbs: ["*"]
--- ---
apiVersion: rbac.authorization.k8s.io/v1beta1 apiVersion: rbac.authorization.k8s.io/v1beta1
@ -133,7 +124,43 @@ spec:
scope: Namespaced
version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: EngineImage
name: engineimages.longhorn.rancher.io
spec:
group: longhorn.rancher.io
names:
kind: EngineImage
listKind: EngineImageList
plural: engineimages
shortNames:
- lhei
singular: engineimage
scope: Namespaced
version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
longhorn-manager: Node
name: nodes.longhorn.rancher.io
spec:
group: longhorn.rancher.io
names:
kind: Node
listKind: NodeList
plural: nodes
shortNames:
- lhn
singular: node
scope: Namespaced
version: v1alpha1
---
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
labels:
@ -141,21 +168,17 @@ metadata:
name: longhorn-manager
namespace: longhorn-system
spec:
selector:
matchLabels:
app: longhorn-manager
template:
metadata:
labels:
app: longhorn-manager
spec:
containers:
- name: longhorn-manager
image: rancher/longhorn-manager:v0.3.0
imagePullPolicy: Always
securityContext:
privileged: true
@ -164,9 +187,9 @@ spec:
- -d
- daemon
- --engine-image
- rancher/longhorn-engine:v0.3.0
- --manager-image
- rancher/longhorn-manager:v0.3.0
- --service-account
- longhorn-service-account
ports:
@ -180,8 +203,7 @@ spec:
mountPath: /var/run/
- name: longhorn
mountPath: /var/lib/rancher/longhorn/
mountPropagation: Bidirectional
env:
- name: POD_NAMESPACE
valueFrom:
@ -208,8 +230,6 @@ spec:
- name: longhorn
hostPath:
path: /var/lib/rancher/longhorn/
serviceAccountName: longhorn-service-account
---
kind: Service
@ -227,7 +247,7 @@ spec:
targetPort: 9500
sessionAffinity: ClientIP
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
labels:
@ -236,6 +256,9 @@ metadata:
namespace: longhorn-system
spec:
replicas: 1
selector:
matchLabels:
app: longhorn-ui
template:
metadata:
labels:
@ -243,7 +266,7 @@ spec:
spec:
containers:
- name: longhorn-ui
image: rancher/longhorn-ui:v0.3.0
ports:
- containerPort: 8000
env:
@ -265,28 +288,40 @@ spec:
targetPort: 8000
type: LoadBalancer
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: longhorn-driver-deployer
namespace: longhorn-system
spec:
replicas: 1
selector:
matchLabels:
app: longhorn-driver-deployer
template:
metadata:
labels:
app: longhorn-driver-deployer
spec:
initContainers:
- name: wait-longhorn-manager
image: rancher/longhorn-manager:v0.3.0
command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done']
containers:
- name: longhorn-driver-deployer
image: rancher/longhorn-manager:v0.3.0
imagePullPolicy: Always
command:
- longhorn-manager
- -d
- deploy-driver
- --manager-image
- rancher/longhorn-manager:v0.3.0
- --manager-url
- http://longhorn-backend:9500/v1
# manually choose "flexvolume" or "csi"
#- --driver
#- flexvolume
env:
- name: POD_NAMESPACE
valueFrom:
@ -296,9 +331,17 @@ spec:
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: FLEXVOLUME_DIR
value: "/var/lib/kubelet/volumeplugins"
# FOR RKE
#value: "/var/lib/kubelet/volumeplugins"
# FOR GKE
#value: "/home/kubernetes/flexvolume/"
# For default or auto detection with Kubernetes <= v1.8
#value: ""
serviceAccountName: longhorn-service-account
---

docs/base-image.md Normal file

@ -0,0 +1,250 @@
# Base Image Support
Longhorn supports creation of block devices backed by a base image. Longhorn
base images are packaged as Docker images. Public or private registries may
be used as a distribution mechanism for your Docker base images.
## Usage
Volumes backed by a base image can be created in three ways.
1. [UI](#ui) - Create Longhorn volumes exposed as block device or iSCSI target
2. [FlexVolume Driver](#flexvolume-driver) - Create Longhorn block devices and consume in Kubernetes pods
3. [CSI Driver](#csi-driver) - (Newer) Create Longhorn block devices and consume in Kubernetes pods
### UI
On the `Volume` tab, click the `Create Volume` button. The `Base Image` field
expects a Docker image name such as `rancher/vm-ubuntu:16.04.4-server-amd64`.
### FlexVolume Driver
The FlexVolume driver supports volumes backed by a base image. Below is a sample
FlexVolume definition including the `baseImage` option.
```
name: flexvol
flexVolume:
driver: "rancher.io/longhorn"
fsType: "ext4"
options:
size: "32Mi"
numberOfReplicas: "3"
staleReplicaTimeout: "20"
fromBackup: ""
baseImage: "rancher/longhorn-test:baseimage-ext4"
```
You do not need to (and probably shouldn't) explicitly set the filesystem type
`fsType` when a base image is present. If you do, it must match the base image's
filesystem or the FlexVolume driver will return an error.
Try it out for yourself. Make sure the Longhorn driver deployer specifies flag
`--driver flexvolume`, otherwise a different driver may be deployed. The
following example creates an nginx pod serving content from a flexvolume with
a base image and is accessible from a service.
```
kubectl create -f https://raw.githubusercontent.com/rancher/longhorn-manager/v0.3-rc/examples/flexvolume/example_baseimage.yaml
```
Wait until the pod is running.
```
kubectl get po/flexvol-baseimage -w
```
Query for the service you created.
```
kubectl get svc/flexvol-baseimage
```
Your service should look similar.
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/flexvol-baseimage LoadBalancer 10.43.153.186 <pending> 80:31028/TCP 2m
```
Now let's access something packaged inside the base image through the Nginx
webserver, exposed by the `LoadBalancer` service. If you have LoadBalancer
support and `EXTERNAL-IP` is set, navigate to the following URL.
```
http://<EXTERNAL-IP>/guests/hd/party-wizard.gif
```
Otherwise, navigate to the following URL where `NODE-IP` is the external IP
address of any Kubernetes node and `NODE-PORT` is the second port in the
service (`31028` in the example service above).
```
http://<NODE-IP>:<NODE-PORT>/guests/hd/party-wizard.gif
```
Finally, tear down the pod and service.
```
kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn-manager/v0.3-rc/examples/flexvolume/example_baseimage.yaml
```
### CSI Driver
The CSI driver supports volumes backed by a base image. Below is a sample
StorageClass definition including the `baseImage` option.
```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: example
provisioner: rancher.io/longhorn
parameters:
numberOfReplicas: '3'
staleReplicaTimeout: '30'
fromBackup: ''
baseImage: rancher/longhorn-test:baseimage-ext4
```
Let's walk through an example. First, ensure the CSI Plugin is deployed.
```
kubectl -n longhorn-system get daemonset.apps/longhorn-csi-plugin
```
The following example creates an nginx statefulset with two replicas serving
content from two csi-provisioned volumes backed by a base image. The
statefulset is accessible from a service.
```
kubectl create -f https://raw.githubusercontent.com/rancher/longhorn-manager/v0.3-rc/examples/provisioner_with_baseimage.yaml
```
Wait until both pods are running.
```
kubectl -l app=provisioner-baseimage get po -w
```
Query for the service you created.
```
kubectl get svc/csi-baseimage
```
Your service should look similar.
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
csi-baseimage LoadBalancer 10.43.47.129 <pending> 80:32768/TCP 4m
```
Now let's access something packaged inside the base image through the Nginx
webserver, exposed by the `LoadBalancer` service. If you have LoadBalancer
support and `EXTERNAL-IP` is set, navigate to the following URL.
```
http://<EXTERNAL-IP>/guests/hd/party-wizard.gif
```
Otherwise, navigate to the following URL where `NODE-IP` is the external IP
address of any Kubernetes node and `NODE-PORT` is the second port in the
service (`32768` in the example service above).
```
http://<NODE-IP>:<NODE-PORT>/guests/hd/party-wizard.gif
```
Finally, tear down the pod and service.
```
kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn-manager/v0.3-rc/examples/provisioner_with_baseimage.yaml
```
## Building
Creating and packaging an empty base image is a very simple process.
1. [Install QEMU](https://en.wikibooks.org/wiki/QEMU/Installing_QEMU).
2. Create a qcow2 image.
```
qemu-img create -f qcow2 example.qcow2 4G
```
3. Create the `Dockerfile` file with the following contents:
```
FROM busybox
COPY example.qcow2 /base_image/example.qcow2
```
4. Build and publish the image:
```
DOCKERHUB_ACCT=rancher
docker build -t ${DOCKERHUB_ACCT}/longhorn-example:baseimage .
docker push ${DOCKERHUB_ACCT}/longhorn-example:baseimage
```
That's it! Your (empty) base image is ready for (no) use. Let's now explore
some use cases for a base image and what we should do to our `example.qcow2`
before building and publishing.
### Simple Filesystem
Suppose we want to store some static web assets in a volume. We have our qcow2
image and the web assets, but how do we put the assets in the image?
On a Linux machine, load the network block device module.
```
sudo modprobe nbd
```
Use `qemu-nbd` to expose the image as a network block device.
```
sudo qemu-nbd -f qcow2 -c /dev/nbd0 example.qcow2
```
The raw block device needs a filesystem. Consider your infrastructure and
choose an appropriate filesystem. We will use EXT4 filesystem.
```
sudo mkfs -t ext4 /dev/nbd0
```
Mount the filesystem.
```
mkdir -p example
sudo mount /dev/nbd0 example
```
Copy web assets to filesystem.
```
cp /web/assets/* example/
```
Unmount the filesystem, shut down `qemu-nbd`, and clean up.
```
sudo umount example
sudo killall qemu-nbd
rmdir example
```
Optionally, compress the image.
```
qemu-img convert -c -O qcow2 example.qcow2 example.compressed.qcow2
```
Follow the build and publish image steps and you are done. [Example script](https://raw.githubusercontent.com/rancher/longhorn-tests/master/manager/test_containers/baseimage/generate.sh).
### Virtual Machine
See [this document](https://github.com/rancher/vm/blob/master/docs/images.md) for the basic procedure of preparing Virtual Machine images.

docs/gke.md Normal file

@ -0,0 +1,25 @@
# Google Kubernetes Engine
The user must use `Ubuntu` as the OS on the nodes, instead of `Container-Optimized OS` (the default), since the latter doesn't support `open-iscsi`, which is required by Longhorn.
The configuration yaml will be slightly different for Google Kubernetes Engine (GKE):
1. GKE requires the user to manually grant themselves the cluster-admin role to enable RBAC. The user needs to execute the following command before creating the Longhorn system using the yaml files.
```
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<name@example.com>
```
Here `name@example.com` is the user's account name in GCE, and it's case sensitive. See [here](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control) for details.
2. The default Flexvolume plugin directory is different on GKE 1.8+: it's `/home/kubernetes/flexvolume`. The user needs to use the following commands instead:
```
FLEXVOLUME_DIR="/home/kubernetes/flexvolume/"
curl -s https://raw.githubusercontent.com/rancher/longhorn/v0.3-rc/deploy/longhorn.yaml|sed "s#^\( *\)value: \"/var/lib/kubelet/volumeplugins\"#\1value: \"${FLEXVOLUME_DIR}\"#g" > longhorn.yaml
kubectl create -f longhorn.yaml
```
See [Troubleshooting](./troubleshooting.md) for details.

docs/iscsi.md Normal file

@ -0,0 +1,24 @@
# iSCSI support
Longhorn supports an iSCSI target frontend mode. The user can connect to it
through any iSCSI client, including open-iscsi and virtual machine
hypervisors like KVM, as long as the client is in the same network as the Longhorn system.
The Longhorn driver (CSI/Flexvolume) doesn't support iSCSI mode.
To start a volume in iSCSI target frontend mode, select `iSCSI` as the frontend
when creating the volume. After the volume has been attached, the user will see
something like the following in the `endpoint` field:
```
iscsi://10.42.0.21:3260/iqn.2014-09.com.rancher:testvolume/1
```
Here:
1. The IP and port is `10.42.0.21:3260`.
2. The target name is `iqn.2014-09.com.rancher:testvolume`. `testvolume` is the
name of the volume.
3. The LUN number is 1. Longhorn always uses LUN 1.
Then the user can use the above information to connect to the iSCSI target provided by
Longhorn using an iSCSI client, for example:
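A sketch with open-iscsi on a Linux client, using the example endpoint above (run as root):
```
# discover the target exposed by Longhorn
iscsiadm -m discovery -t sendtargets -p 10.42.0.21:3260
# log in; the volume then appears as a local block device (e.g. /dev/sdX)
iscsiadm -m node -T iqn.2014-09.com.rancher:testvolume -p 10.42.0.21:3260 --login
```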

docs/multidisk.md Normal file

@ -0,0 +1,28 @@
# Multiple disks support
Longhorn supports using more than one disk on a node to store the volume data.
To add a new disk to a node, head to the `Node` tab, select one of the nodes, and click the edit disk icon.
By default, `/var/lib/rancher/longhorn` on the host will be used for storing the volume data.
To add any additional disks, the user needs to (see the sketch below):
1. Mount the disk on the host to a certain directory.
2. Add the path of the mounted disk into the disk list of the node.
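A hedged sketch of step 1 (the device name and mount point are placeholders; formatting destroys existing data):
```
mkfs.ext4 /dev/sdb                    # format the extra disk
mkdir -p /mnt/longhorn-disk-1         # create a mount point
mount /dev/sdb /mnt/longhorn-disk-1   # mount it; add an /etc/fstab entry to persist across reboots
```
The mounted path (`/mnt/longhorn-disk-1` here) is then what you add to the node's disk list.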
Longhorn will automatically detect the storage information (e.g. maximum space, available space) about the disk, and start scheduling to it if it's possible to accommodate the volume there. A path mounted by an existing disk won't be allowed.
Users can reserve a certain amount of disk space to stop Longhorn from using it. It can be set in the `Space Reserved` field for the disk. This is useful for non-dedicated storage disks on the node.
The kubelet needs to preserve node stability when available compute resources are low. This is especially important when dealing with incompressible compute resources, such as memory or disk space. If such resources are exhausted, nodes become unstable. To avoid kubelet `Disk pressure` issues after scheduling several volumes, Longhorn by default reserves 30% of the root disk space (`/var/lib/rancher/longhorn`) to ensure node stability.
Nodes and disks can be excluded from future scheduling. Note that any already-scheduled storage space won't be released automatically if scheduling is disabled for the node.
There are two global settings that affect the scheduling of volumes as well.
`StorageOverProvisioningPercentage` defines the upper bound of `ScheduledStorage / (MaximumStorage - ReservedStorage)`. The default value is `500` (%). That means we can schedule a total of 750 GiB of Longhorn volumes on a 200 GiB disk with 50 GiB reserved for the root file system. This works because people normally won't use that large an amount of data in the volume, and the volumes are stored as sparse files.
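Restating the example above as arithmetic:
```
schedulable = (MaximumStorage - ReservedStorage) * StorageOverProvisioningPercentage / 100
            = (200 GiB - 50 GiB) * 500 / 100
            = 750 GiB
```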
`StorageMinimalAvailablePercentage` defines when a disk cannot be scheduled with more volumes. The default value is `10` (%). The bigger value between `MaximumStorage * StorageMinimalAvailablePercentage / 100` and `MaximumStorage - ReservedStorage` will be used to determine if a disk is running low and cannot be scheduled with more volumes.
Note that currently there is no guarantee that the space used by volumes won't exceed `StorageMinimalAvailablePercentage`, because:
1. A Longhorn volume can be bigger than its specified size, since snapshots contain the old state of the volume.
2. Longhorn does over-provisioning by default.

docs/restore_statefulset.md Normal file

@ -0,0 +1,221 @@
# Restoring Volumes for Kubernetes Stateful Sets
Longhorn supports restoring backups, and one of the use cases for this feature
is to restore data for use in a Kubernetes `Stateful Set`, which requires
restoring a volume for each replica that was backed up.
To restore, follow the below instructions based on which plugin you have
deployed. The example below uses a Stateful Set with one volume attached to
each Pod and two replicas.
- [CSI Instructions](#csi-instructions)
- [FlexVolume Instructions](#flexvolume-instructions)
### CSI Instructions
1. Connect to the `Longhorn UI` page in your web browser. Under the `Backup` tab,
select the name of the Stateful Set volume. Click the dropdown menu of the
volume entry and restore it. Name the volume something that can easily be
referenced later for the `Persistent Volumes`.
- Repeat this step for each volume you need restored.
- For example, if restoring a Stateful Set with two replicas that had
volumes named `pvc-01a` and `pvc-02b`, the restore could look like this:
| Backup Name | Restored Volume |
|-------------|-------------------|
| pvc-01a | statefulset-vol-0 |
| pvc-02b | statefulset-vol-1 |
2. In Kubernetes, create a `Persistent Volume` for each Longhorn volume that was
created. Name the volumes something that can easily be referenced later for the
`Persistent Volume Claims`. `storage` capacity, `numberOfReplicas`,
`storageClassName`, and `volumeHandle` must be replaced below. In the example,
we're referencing `statefulset-vol-0` and `statefulset-vol-1` in Longhorn and
using `longhorn` as our `storageClassName`.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: statefulset-vol-0
spec:
capacity:
storage: <size> # must match size of Longhorn volume
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
csi:
driver: io.rancher.longhorn # driver must match this
fsType: ext4
volumeAttributes:
numberOfReplicas: <replicas> # must match Longhorn volume value
staleReplicaTimeout: '30'
volumeHandle: statefulset-vol-0 # must match volume name from Longhorn
storageClassName: longhorn # must be same name that we will use later
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: statefulset-vol-1
spec:
capacity:
storage: <size> # must match size of Longhorn volume
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
csi:
driver: io.rancher.longhorn # driver must match this
fsType: ext4
volumeAttributes:
numberOfReplicas: <replicas> # must match Longhorn volume value
staleReplicaTimeout: '30'
volumeHandle: statefulset-vol-1 # must match volume name from Longhorn
storageClassName: longhorn # must be same name that we will use later
```
3. Go to [General Instructions](#general-instructions).
### FlexVolume Instructions
Because of the implementation of `FlexVolume`, creating the Longhorn volumes
from the `Longhorn UI` manually can be skipped. Instead, follow these
instructions:
1. Connect to the `Longhorn UI` page in your web browser. Under the `Backup` tab,
select the name of the `Stateful Set` volume. Click the dropdown menu of the
volume entry and select `Get URL`.
- Repeat this step for each volume you need restored. Save these URLs for the
next step.
- If using NFS backups, the URL will appear similar to:
- `nfs://longhorn-nfs-svc.default:/opt/backupstore?backup=backup-c57844b68923408f&volume=pvc-59b20247-99bf-11e8-8a92-be8835d7412a`.
- If using S3 backups, the URL will appear similar to:
- `s3://backupbucket@us-east-1/backupstore?backup=backup-1713a64cd2774c43&volume=longhorn-testvol-g1n1de`
2. Similar to `Step 2` for CSI, create a `Persistent Volume` for each volume you
want to restore. `storage` capacity, `storageClassName`, and the FlexVolume
`options` must be replaced. This example uses `longhorn` as the
`storageClassName`.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: statefulset-vol-0
spec:
capacity:
storage: <size> # must match "size" parameter below
accessModes:
- ReadWriteOnce
storageClassName: longhorn # must be same name that we will use later
flexVolume:
driver: "rancher.io/longhorn" # driver must match this
fsType: "ext4"
options:
size: <size> # must match "storage" parameter above
numberOfReplicas: <replicas>
staleReplicaTimeout: <timeout>
fromBackup: <backup URL> # must be set to Longhorn backup URL
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: statefulset-vol-1
spec:
capacity:
storage: <size> # must match "size" parameter below
accessModes:
- ReadWriteOnce
storageClassName: longhorn # must be same name that we will use later
flexVolume:
driver: "rancher.io/longhorn" # driver must match this
fsType: "ext4"
options:
size: <size> # must match "storage" parameter above
numberOfReplicas: <replicas>
staleReplicaTimeout: <timeout>
fromBackup: <backup URL> # must be set to Longhorn backup URL
```
3. Go to [General Instructions](#general-instructions).
### General Instructions
**Make sure you have followed either the [CSI](#csi-instructions) or
[FlexVolume](#flexvolume-instructions) instructions before following the steps
in this section.**
1. In the `namespace` the `Stateful Set` will be deployed in, create Persistent
Volume Claims **for each** `Persistent Volume`.
- The name of the `Persistent Volume Claim` must follow this naming scheme:
`<name of Volume Claim Template>-<name of Stateful Set>-<index>`. Stateful
Set Pods are zero-indexed. In this example, the name of the `Volume Claim
Template` is `data`, the name of the `Stateful Set` is `webapp`, and there
are two replicas, which are indexes `0` and `1`.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data-webapp-0
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi # must match size from earlier
storageClassName: longhorn # must match name from earlier
volumeName: statefulset-vol-0 # must reference Persistent Volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data-webapp-1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi # must match size from earlier
storageClassName: longhorn # must match name from earlier
volumeName: statefulset-vol-1 # must reference Persistent Volume
```
2. Create the `Stateful Set`:
```yaml
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
name: webapp # match this with the pvc naming scheme
spec:
selector:
matchLabels:
app: nginx # has to match .spec.template.metadata.labels
serviceName: "nginx"
replicas: 2 # by default is 1
template:
metadata:
labels:
app: nginx # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: data
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: data # match this with the pvc naming scheme
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: longhorn # must match name from earlier
resources:
requests:
storage: 2Gi # must match size from earlier
```
The restored data should now be accessible from inside the `Stateful Set`
`Pods`.
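As a quick check (assuming the example `webapp` StatefulSet above), you can list the restored content from one of the pods:
```
# add -n <namespace> if the StatefulSet was not deployed in the default namespace
kubectl exec webapp-0 -- ls -la /usr/share/nginx/html
```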

57
docs/troubleshooting.md Normal file
View File

@ -0,0 +1,57 @@
# Troubleshooting
## Common issues
### Volume can be attached/detached from UI, but Kubernetes Pod/StatefulSet etc cannot use it
Check whether the volume plugin directory has been set correctly.
By default, Kubernetes uses `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` as the directory for volume plugin drivers, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites).
But some vendors may choose to change the directory for various reasons. For example, GKE uses `/home/kubernetes/flexvolume`, and RKE uses `/var/lib/kubelet/volumeplugins`.
You can find the correct directory by running `ps aux|grep kubelet` on the host and checking the `--volume-plugin-dir` parameter. If it is not set, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used.
You can also use the [environment check script](../README.md#environment-check-script) for this purpose.
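For example, the following one-liner (run on the host) is a quick sketch of the check described above; no output means the default directory is in use:
```
# Print the kubelet --volume-plugin-dir flag and its value, if present.
# No output means the default plugin directory is in use.
ps aux | grep '[k]ubelet' | grep -o -- '--volume-plugin-dir[= ][^ ]*'
```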
## Troubleshooting guide
There are a few components in Longhorn: the Manager, the Engine, the Driver and the UI. By default, all of these components run as pods in the `longhorn-system` namespace inside the Kubernetes cluster.
### UI
Making use of the Longhorn UI is a good start for troubleshooting. For example, if Kubernetes cannot mount one volume correctly, after stopping the workload, try to attach and mount that volume manually on one node and access the content to check if the volume is intact.
The event logs in the UI dashboard also provide information about possible issues. Check for event logs at the `Warning` level.
### Manager and engines
You can get the logs from the Longhorn Manager and Engines to help with troubleshooting. The most useful logs are those from `longhorn-manager-xxx` and the logs inside the Longhorn Engine, e.g. `<volname>-e-xxxx` and `<volname>-r-xxxx`.
Since there are normally multiple Longhorn Managers running at the same time, we recommend using [kubetail](https://github.com/johanhaleby/kubetail), a great tool for keeping track of the logs of multiple pods. To track the manager logs in real time, you can use:
```
kubetail longhorn-system -n longhorn-system
```
### CSI driver
For the CSI driver, check the logs of `csi-attacher-0` and `csi-provisioner-0`, as well as the containers in `longhorn-csi-plugin-xxx`.
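For example, assuming the default `longhorn-system` namespace (the pod name suffix and the container name below are illustrative; list the pods first to get the real names):
```
kubectl -n longhorn-system get pods
kubectl -n longhorn-system logs csi-attacher-0
kubectl -n longhorn-system logs csi-provisioner-0
# the longhorn-csi-plugin pod runs more than one container, so pick one with -c
# (the container name here is an assumption; check `kubectl describe pod` for the real names)
kubectl -n longhorn-system logs longhorn-csi-plugin-xxxxx -c longhorn-csi-plugin
```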
### Flexvolume driver
For the Flexvolume driver, first check where the driver has been installed on the node. Check the log of `longhorn-driver-deployer-xxxx` for that information.
Then check the kubelet logs. The Flexvolume driver itself doesn't run inside a container; it runs along with the kubelet process.
If kubelet is running natively on the node, you can use the following command to get the log:
```
journalctl -u kubelet
```
Or if kubelet is running as a container (e.g. in RKE), use the following command instead:
```
docker logs kubelet
```
For even more detailed logs of the Longhorn Flexvolume driver, run the following command on the node or inside the container (if kubelet is running as a container, e.g. in RKE):
```
touch /var/log/longhorn_driver.log
```

198
docs/upgrade.md Normal file
View File

@ -0,0 +1,198 @@
# Upgrade
Here we cover how to upgrade to Longhorn v0.3 from all previous releases.
## Backup Existing Volumes
It's recommended to create a recent backup of every volume to the backupstore
before upgrading.
If you don't have an on-cluster backupstore already, create one. Here we'll use NFS as an example.
1. Execute the following command to create the backupstore:
```
kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/v0.3-rc/deploy/backupstores/nfs-backupstore.yaml
```
2. On the Longhorn UI `Settings` page, set `Backup Target` to
`nfs://longhorn-test-nfs-svc.default:/opt/backupstore` and click `Save`.
Navigate to each volume detail page and click `Take Snapshot` (it's recommended to run `sync` on the host command line before taking the snapshot). Click the new
snapshot and click `Backup`. Wait for the new backup to show up in the volume's backup list before continuing.
## Check For Issues
Make sure no volume is in a degraded or faulted state. Wait for degraded
volumes to heal, and delete or salvage faulted volumes before proceeding.
## Detach Volumes
Shut down all Kubernetes Pods using Longhorn volumes in order to detach the
volumes. The easiest way to achieve this is by deleting all workloads and recreating them after the upgrade. If
this is not desirable, some workloads may be suspended. Here we cover how
each workload can be modified to shut down its pods.
### Deployment
Edit the deployment with `kubectl edit deploy/<name>`.
Set `.spec.replicas` to `0`.
### StatefulSet
Edit the statefulset with `kubectl edit statefulset/<name>`.
Set `.spec.replicas` to `0`.
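As an alternative to editing the manifest, a Deployment or StatefulSet can also be scaled down from the command line; a minimal sketch with placeholder names:
```
# scale the workload down to zero replicas so its pods terminate and the volumes detach
kubectl scale deploy/<name> --replicas=0
kubectl scale statefulset/<name> --replicas=0
```
Remember the original replica counts so the workloads can be restored after the upgrade.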
### DaemonSet
There is no way to suspend this workload.
Delete the daemonset with `kubectl delete ds/<name>`.
### Pod
Delete the pod with `kubectl delete pod/<name>`.
There is no way to suspend a pod not managed by a workload controller.
### CronJob
Edit the cronjob with `kubectl edit cronjob/<name>`.
Set `.spec.suspend` to `true`.
Wait for any currently executing jobs to complete, or terminate them by
deleting relevant pods.
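For example, the suspend flag can also be set with a patch instead of an interactive edit (placeholder name):
```
# suspend the CronJob so no new Jobs are created during the upgrade
kubectl patch cronjob/<name> -p '{"spec":{"suspend":true}}'
```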
### Job
Consider allowing the single-run job to complete.
Otherwise, delete the job with `kubectl delete job/<name>`.
### ReplicaSet
Edit the replicaset with `kubectl edit replicaset/<name>`.
Set `.spec.replicas` to `0`.
### ReplicationController
Edit the replicationcontroller with `kubectl edit rc/<name>`.
Set `.spec.replicas` to `0`.
Wait for the volumes used by Kubernetes to finish detaching.
Then detach all remaining volumes from the Longhorn UI. These volumes were most likely
created and attached outside of Kubernetes via the Longhorn UI or REST API.
## Uninstall the Old Version of Longhorn
Make note of `BackupTarget` on the `Setting` page. You will need to manually
set `BackupTarget` after upgrading from either v0.1 or v0.2.
Delete Longhorn components.
For Longhorn `v0.1` (most likely installed using Longhorn App in Rancher 2.0):
```
kubectl delete -f https://raw.githubusercontent.com/llparse/longhorn/v0.1/deploy/uninstall-for-upgrade.yaml
```
For Longhorn `v0.2`:
```
kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn/v0.2/deploy/uninstall-for-upgrade.yaml
```
If both commands returned `Not found` for all components, Longhorn is probably
deployed in a different namespace. Determine which namespace is in use and
adjust `NAMESPACE` here accordingly:
```
NAMESPACE=<some_longhorn_namespace>
curl -sSfL https://raw.githubusercontent.com/rancher/longhorn/v0.1/deploy/uninstall-for-upgrade.yaml|sed "s#^\( *\)namespace: longhorn#\1namespace: ${NAMESPACE}#g" > longhorn.yaml
kubectl delete -f longhorn.yaml
```
## Backup Longhorn System
We're going to back up the Longhorn CRD YAMLs to a local directory, so we can restore or inspect them later.
### v0.1
You must back up the CRDs for v0.1 because we will change the default deployment namespace for Longhorn.
Check your backups to make sure Longhorn was running in the namespace `longhorn`; otherwise change the value of `NAMESPACE` below.
```
NAMESPACE=longhorn
kubectl -n ${NAMESPACE} get volumes.longhorn.rancher.io -o yaml > longhorn-v0.1-backup-volumes.yaml
kubectl -n ${NAMESPACE} get engines.longhorn.rancher.io -o yaml > longhorn-v0.1-backup-engines.yaml
kubectl -n ${NAMESPACE} get replicas.longhorn.rancher.io -o yaml > longhorn-v0.1-backup-replicas.yaml
kubectl -n ${NAMESPACE} get settings.longhorn.rancher.io -o yaml > longhorn-v0.1-backup-settings.yaml
```
### v0.2
Check your backups to make sure Longhorn was running in the namespace
`longhorn-system`; otherwise change the value of `NAMESPACE` below.
```
NAMESPACE=longhorn-system
kubectl -n ${NAMESPACE} get volumes.longhorn.rancher.io -o yaml > longhorn-v0.2-backup-volumes.yaml
kubectl -n ${NAMESPACE} get engines.longhorn.rancher.io -o yaml > longhorn-v0.2-backup-engines.yaml
kubectl -n ${NAMESPACE} get replicas.longhorn.rancher.io -o yaml > longhorn-v0.2-backup-replicas.yaml
kubectl -n ${NAMESPACE} get settings.longhorn.rancher.io -o yaml > longhorn-v0.2-backup-settings.yaml
```
## Delete CRDs in Different Namespace
This is only required for Rancher users running Longhorn App `v0.1`. Delete all
CRDs from your namespace, which is `longhorn` by default.
```
NAMESPACE=longhorn
kubectl -n ${NAMESPACE} get volumes.longhorn.rancher.io -o yaml | sed "s/\- longhorn.rancher.io//g" | kubectl apply -f -
kubectl -n ${NAMESPACE} get engines.longhorn.rancher.io -o yaml | sed "s/\- longhorn.rancher.io//g" | kubectl apply -f -
kubectl -n ${NAMESPACE} get replicas.longhorn.rancher.io -o yaml | sed "s/\- longhorn.rancher.io//g" | kubectl apply -f -
kubectl -n ${NAMESPACE} get settings.longhorn.rancher.io -o yaml | sed "s/\- longhorn.rancher.io//g" | kubectl apply -f -
kubectl -n ${NAMESPACE} delete volumes.longhorn.rancher.io --all
kubectl -n ${NAMESPACE} delete engines.longhorn.rancher.io --all
kubectl -n ${NAMESPACE} delete replicas.longhorn.rancher.io --all
kubectl -n ${NAMESPACE} delete settings.longhorn.rancher.io --all
```
## Install Longhorn v0.3
### Installed with Longhorn App v0.1 in Rancher 2.x
For Rancher users who are running Longhorn v0.1, **do not click the upgrade button in the Rancher App.**
1. Delete the Longhorn App from `Catalog Apps` screen in Rancher UI.
2. Launch Longhorn App template version `0.3.0`.
3. Restore Longhorn System data. This step is required for Rancher users running Longhorn App `v0.1`.
Don't change the NAMESPACE variable below, since the newly installed Longhorn system will be installed in the `longhorn-system` namespace.
```
NAMESPACE=longhorn-system
sed "s#^\( *\)namespace: .*#\1namespace: ${NAMESPACE}#g" longhorn-v0.1-backup-settings.yaml | kubectl apply -f -
sed "s#^\( *\)namespace: .*#\1namespace: ${NAMESPACE}#g" longhorn-v0.1-backup-replicas.yaml | kubectl apply -f -
sed "s#^\( *\)namespace: .*#\1namespace: ${NAMESPACE}#g" longhorn-v0.1-backup-engines.yaml | kubectl apply -f -
sed "s#^\( *\)namespace: .*#\1namespace: ${NAMESPACE}#g" longhorn-v0.1-backup-volumes.yaml | kubectl apply -f -
```
### Installed without using Longhorn App v0.1
For Longhorn v0.2 users who are not using Rancher, follow
[the official Longhorn Deployment instructions](../README.md#deployment).
## Access UI and Set BackupTarget
Wait until the longhorn-ui and longhorn-manager pods are `Running`:
```
kubectl -n longhorn-system get pod -w
```
[Access the UI](../README.md#access-the-ui).
On `Setting > General`, set `Backup Target` to the backup target used in
the previous version. In our example, this is
`nfs://longhorn-test-nfs-svc.default:/opt/backupstore`.
## Upgrade Engine Images
Ensure all volumes are detached. If any are still attached, detach them now
and wait until they are in the `Detached` state.
Select all the volumes using batch selection. Click the batch operation button
`Upgrade Engine` and choose the only engine image available in the list. It's
the default engine image shipped with the manager for this release.
## Attach Volumes
Now we will resume all workloads by reversing the changes we made to detach
the volumes. Any volume not part of a K8s workload or pod must be attached
manually.
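For the workloads suspended earlier, this typically means restoring the original replica counts and un-suspending cron jobs; a sketch with placeholder names and counts:
```
kubectl scale deploy/<name> --replicas=<original replica count>
kubectl scale statefulset/<name> --replicas=<original replica count>
kubectl patch cronjob/<name> -p '{"spec":{"suspend":false}}'
```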
## Note
Upgrade is always tricky. Keeping recent backups for volumes is critical. If anything goes wrong, you can restore the volume using the backup.
If you have any issues, please report them at
https://github.com/rancher/longhorn/issues and include your backup YAML files
as well as the manager logs.

View File

@ -0,0 +1,50 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: longhorn-vol-pv
spec:
capacity:
storage: 2Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
csi:
driver: io.rancher.longhorn
fsType: ext4
volumeAttributes:
numberOfReplicas: '3'
staleReplicaTimeout: '30'
volumeHandle: existing-longhorn-volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: longhorn-vol-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
volumeName: longhorn-vol-pv
---
apiVersion: v1
kind: Pod
metadata:
name: volume-test
namespace: default
spec:
containers:
- name: volume-test
image: nginx:stable-alpine
imagePullPolicy: IfNotPresent
volumeMounts:
- name: vol
mountPath: /data
ports:
- containerPort: 80
volumes:
- name: vol
persistentVolumeClaim:
claimName: longhorn-vol-pvc

View File

@ -0,0 +1,41 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: longhorn
provisioner: rancher.io/longhorn
parameters:
numberOfReplicas: '3'
staleReplicaTimeout: '30'
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: longhorn-vol-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: longhorn
---
apiVersion: v1
kind: Pod
metadata:
name: volume-test
namespace: default
spec:
containers:
- name: volume-test
image: nginx:stable-alpine
imagePullPolicy: IfNotPresent
volumeMounts:
- name: vol
mountPath: /data
ports:
- containerPort: 80
volumes:
- name: vol
persistentVolumeClaim:
claimName: longhorn-vol-pvc

View File

@ -0,0 +1,43 @@
apiVersion: v1
kind: Pod
metadata:
labels:
app: flexvol-baseimage
name: flexvol-baseimage
namespace: default
spec:
containers:
- name: flexvol-baseimage
image: nginx:stable-alpine
imagePullPolicy: IfNotPresent
volumeMounts:
- name: flexvol
mountPath: /usr/share/nginx/html
ports:
- containerPort: 80
volumes:
- name: flexvol
flexVolume:
driver: rancher.io/longhorn
options:
size: 32Mi
numberOfReplicas: "3"
staleReplicaTimeout: "20"
fromBackup: ""
baseImage: rancher/longhorn-test:baseimage-ext4
---
apiVersion: v1
kind: Service
metadata:
labels:
app: flexvol-baseimage
name: flexvol-baseimage
namespace: default
spec:
ports:
- name: web
port: 80
targetPort: 80
selector:
app: flexvol-baseimage
type: LoadBalancer

View File

@ -0,0 +1,63 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
labels:
app: provisioner-baseimage
name: baseimage-storageclass
provisioner: rancher.io/longhorn
parameters:
numberOfReplicas: '3'
staleReplicaTimeout: '30'
fromBackup: ''
baseImage: rancher/longhorn-test:baseimage-ext4
---
apiVersion: v1
kind: Service
metadata:
labels:
app: provisioner-baseimage
name: provisioner-baseimage-service
spec:
ports:
- port: 80
name: web
selector:
app: provisioner-baseimage
type: LoadBalancer
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
labels:
app: provisioner-baseimage
name: provisioner-baseimage-statefulset
spec:
selector:
matchLabels:
app: provisioner-baseimage
serviceName: provisioner-baseimage
replicas: 2
template:
metadata:
labels:
app: provisioner-baseimage
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: nginx:stable-alpine
imagePullPolicy: IfNotPresent
volumeMounts:
- name: baseimage-vol
mountPath: /usr/share/nginx/html
ports:
- containerPort: 80
volumeClaimTemplates:
- metadata:
name: baseimage-vol
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: baseimage-storageclass
resources:
requests:
storage: 32Mi

184
scripts/environment_check.sh Executable file
View File

@ -0,0 +1,184 @@
#!/bin/bash
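# Environment check for Longhorn: detects the kubelet Flexvolume plugin directory
# using a temporary detection pod and verifies that MountPropagation is enabled
# via a DaemonSet; both detection workloads are cleaned up on exit.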
dependencies() {
local targets=($@)
local allFound=true
for ((i=0; i<${#targets[@]}; i++)); do
local target=${targets[$i]}
if [ "$(which $target)" == "" ]; then
allFound=false
echo Not found: $target
fi
done
if [ "$allFound" == "false" ]; then
echo "Please install missing dependencies."
exit 2
fi
}
create_ds() {
cat <<EOF > $TEMP_DIR/environment_check.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: longhorn-environment-check
name: longhorn-environment-check
spec:
selector:
matchLabels:
app: longhorn-environment-check
template:
metadata:
labels:
app: longhorn-environment-check
spec:
containers:
- name: longhorn-environment-check
image: busybox
args: ["/bin/sh", "-c", "sleep 1000000000"]
volumeMounts:
- name: mountpoint
mountPath: /tmp/longhorn-environment-check
mountPropagation: Bidirectional
securityContext:
privileged: true
volumes:
- name: mountpoint
hostPath:
path: /tmp/longhorn-environment-check
EOF
kubectl create -f $TEMP_DIR/environment_check.yaml
}
create_pod() {
cat <<EOF > $TEMP_DIR/detect-flexvol-dir.yaml
apiVersion: v1
kind: Pod
metadata:
name: detect-flexvol-dir
spec:
containers:
- name: detect-flexvol-dir
image: busybox
command: ["/bin/sh"]
args:
- -c
- |
find_kubelet_proc() {
for proc in \`find /proc -type d -maxdepth 1\`; do
if [ ! -f \$proc/cmdline ]; then
continue
fi
if [[ "\$(cat \$proc/cmdline | tr '\000' '\n' | head -n1 | tr '/' '\n' | tail -n1)" == "kubelet" ]]; then
echo \$proc
return
fi
done
}
get_flexvolume_path() {
proc=\$(find_kubelet_proc)
if [ "\$proc" != "" ]; then
path=\$(cat \$proc/cmdline | tr '\000' '\n' | grep volume-plugin-dir | tr '=' '\n' | tail -n1)
if [ "\$path" == "" ]; then
echo '/usr/libexec/kubernetes/kubelet-plugins/volume/exec/'
else
echo \$path
fi
return
fi
echo 'no kubelet process found, dunno'
}
get_flexvolume_path
securityContext:
privileged: true
hostPID: true
restartPolicy: Never
EOF
kubectl create -f $TEMP_DIR/detect-flexvol-dir.yaml
}
cleanup() {
echo "cleaning up detection workloads..."
kubectl delete -f $TEMP_DIR/environment_check.yaml &
a=$!
kubectl delete -f $TEMP_DIR/detect-flexvol-dir.yaml &
b=$!
wait $a
wait $b
rm -rf $TEMP_DIR
echo "clean up completed"
}
wait_pod_ready() {
while true; do
local pod=$(kubectl get po/detect-flexvol-dir -o json)
local phase=$(echo $pod | jq -r .status.phase)
if [ "$phase" == "Succeeded" ]; then
echo "pod/detect-flexvol-dir completed"
return
fi
echo "waiting for pod/detect-flexvol-dir to finish"
sleep 3
done
}
validate_pod() {
flexvol_path=$(kubectl logs detect-flexvol-dir)
echo -e "\n FLEXVOLUME_DIR=\"${flexvol_path}\"\n"
}
wait_ds_ready() {
while true; do
local ds=$(kubectl get ds/longhorn-environment-check -o json)
local numberReady=$(echo $ds | jq .status.numberReady)
local desiredNumberScheduled=$(echo $ds | jq .status.desiredNumberScheduled)
if [ "$desiredNumberScheduled" == "$numberReady" ] && [ "$desiredNumberScheduled" != "0" ]; then
echo "all pods ready ($numberReady/$desiredNumberScheduled)"
return
fi
echo "waiting for pods to become ready ($numberReady/$desiredNumberScheduled)"
sleep 3
done
}
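# Check that the Bidirectional mountPropagation setting survived on the
# environment-check pods; if the API server stripped it, MountPropagation is
# disabled and the CSI driver / base image feature cannot be supported.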
validate_ds() {
local allSupported=true
    local pods=$(kubectl get po -l app=longhorn-environment-check -o json)
    local podCount=$(echo $pods | jq '.items | length')
    for ((i=0; i<podCount; i++)); do
local pod=$(echo $pods | jq .items[$i])
local nodeName=$(echo $pod | jq -r .spec.nodeName)
local mountPropagation=$(echo $pod | jq -r '.spec.containers[0].volumeMounts[] | select(.name=="mountpoint") | .mountPropagation')
if [ "$mountPropagation" != "Bidirectional" ]; then
allSupported=false
echo "node $nodeName: MountPropagation DISABLED"
fi
done
if [ "$allSupported" != "true" ]; then
echo
echo " MountPropagation is disabled on at least one node."
echo " As a result, CSI driver and Base image cannot be supported."
echo
exit 1
else
echo -e "\n MountPropagation is enabled!\n"
fi
}
dependencies kubectl jq mktemp
TEMP_DIR=$(mktemp -d)
trap cleanup EXIT
create_pod
create_ds
wait_pod_ready
wait_ds_ready
validate_pod
validate_ds
exit 0