Update docs relevant to FlexVolume dirpath setting

James Oliver 2018-09-11 17:59:03 -07:00 committed by Sheng Yang
parent 57d521027f
commit 581b54651a
6 changed files with 30 additions and 140 deletions

View File

@ -16,7 +16,7 @@ The latest release of Longhorn is v0.3.0.
Longhorn is 100% open source software. Project source code is spread across a number of repos:
1. Longhorn engine -- Core controller/replica logic https://github.com/rancher/longhorn-engine
1. Longhorn manager -- Longhorn orchestration, includes Flexvolume driver for Kubernetes https://github.com/rancher/longhorn-manager
1. Longhorn manager -- Longhorn orchestration, includes FlexVolume driver for Kubernetes https://github.com/rancher/longhorn-manager
1. Longhorn UI -- Dashboard https://github.com/rancher/longhorn-ui
# Demo
@ -46,23 +46,17 @@ curl -sSfL https://raw.githubusercontent.com/rancher/longhorn/master/scripts/env
```
Example result:
```
pod "detect-flexvol-dir" created
daemonset.apps "longhorn-environment-check" created
waiting for pod/detect-flexvol-dir to finish
pod/detect-flexvol-dir completed
daemonset.apps/longhorn-environment-check created
waiting for pods to become ready (0/3)
all pods ready (3/3)
FLEXVOLUME_DIR="/home/kubernetes/flexvolume"
MountPropagation is enabled!
cleaning up detection workloads...
pod "detect-flexvol-dir" deleted
cleaning up...
daemonset.apps "longhorn-environment-check" deleted
clean up completed
clean up complete
```
Please make a note of `Flexvolume Path` and `MountPropagation` state above.
Please make a note of the `MountPropagation` feature gate status.
### Requirement for the CSI driver
@ -86,15 +80,10 @@ The `Server Version` should be `v1.10` or above.
2. The result of the environment check script should contain `MountPropagation is enabled!`.
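To confirm the server version quickly (output below is illustrative; versions will vary):
```
kubectl version --short
# Client Version: v1.11.3
# Server Version: v1.11.3
```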
### Requirement for the Flexvolume driver
### Requirement for the FlexVolume driver
1. Kubernetes v1.8+
2. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on every node of the Kubernetes cluster.
3. User need to know the volume plugin directory in order to setup the driver correctly.
1. The correct directory should be reported by the environment check script.
2. Rancher RKE: `/var/lib/kubelet/volumeplugins`
3. Google GKE: `/home/kubernetes/flexvolume`
4. For any other distro, use the value reported by the environment check script.
# Upgrading
@ -104,24 +93,15 @@ For instructions on how to upgrade Longhorn App v0.1 or v0.2 to v0.3, [see this
Creating the Longhorn deployment in your Kubernetes cluster is straightforward.
If CSI is supported (as stated above), you can simply run:
```
kubectl apply -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml
```
If you're using Flexvolume driver with Kubernetes Distro other than RKE, replace the value of $FLEXVOLUME_DIR in the following command with your own Flexvolume Directory as specified above.
```
FLEXVOLUME_DIR=<FLEXVOLUME_DIR>
```
Then run
```
curl -s https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml|sed "s#^\( *\)value: \"/var/lib/kubelet/volumeplugins\"#\1value: \"${FLEXVOLUME_DIR}\"#g" > longhorn.yaml
kubectl apply -f longhorn.yaml
```
For Google Kubernetes Engine (GKE) users, see [here](#google-kubernetes-engine) before proceed.
For Google Kubernetes Engine (GKE) users, see [here](#google-kubernetes-engine) before proceeding.
Longhorn manager and Longhorn driver will be deployed as daemonsets in a separate namespace called `longhorn-system`, as you can see in the yaml file.
When you see those pods has started correctly as follows, you've deployed the Longhorn successfully.
When you see those pods have started correctly as follows, you've deployed Longhorn successfully.
Deployed with CSI driver:
```
@ -141,7 +121,7 @@ longhorn-manager-8kqf4 1/1 Running 0 6h
longhorn-manager-kln4h 1/1 Running 0 6h
longhorn-ui-f849dcd85-cgkgg 1/1 Running 0 5d
```
Or with Flexvolume driver
Or with FlexVolume driver:
```
# kubectl -n longhorn-system get pod
NAME READY STATUS RESTARTS AGE
@ -240,7 +220,7 @@ User can revert to any previous taken snapshot using the UI. Since Longhorn is a
Longhorn is a `crash-consistent` block storage solution.
It's normal for the OS to keep content in the cache before writing into the block layer. However, it also means if the all the replicas are down, then the Longhorn may not contains the immediate change before the shutdown, since the content was kept in the OS level cache and hadn't transfered to Longhorn system yet. It's similar to if your desktop was down due to a power outage, after resuming the power, you may find some weird files in the hard drive.
It's normal for the OS to keep content in the cache before writing it into the block layer. However, it also means that if all the replicas are down, Longhorn may not contain the changes made immediately before the shutdown, since the content was kept in the OS-level cache and hadn't been transferred to the Longhorn system yet. It's similar to a desktop losing power: after the power comes back, you may find some corrupted files on the hard drive.
To force the data to be written to the block layer at any given moment, the user can run the `sync` command on the node manually, or unmount the disk. The OS will write the content from the cache to the block layer in either situation.
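For example (a minimal sketch; the mount point below is hypothetical):
```
# Flush OS-level caches down to the block layer
sync

# Or unmount the volume; the OS flushes cached writes on unmount
umount /mnt/longhorn-vol
```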
@ -251,7 +231,7 @@ A corresponding snapshot is needed for creating a backup. And user can choose to
A backupstore is an NFS server or an S3-compatible server.
A backup target represents a backupstore in the Longhorn. The backup target can be set at `Settings/General/BackupTarget`
A backup target represents a backupstore in Longhorn. The backup target can be set at `Settings/General/BackupTarget`.
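For illustration, backup target values take one of these forms (the bucket, region, server and path below are hypothetical):
```
# S3-compatible backupstore
s3://backupbucket@us-east-1/

# NFS backupstore
nfs://nfs-server.example.com:/opt/backupstore
```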
#### Setup AWS S3 backupstore
1. Create a new bucket in AWS S3.

View File

@ -336,9 +336,9 @@ spec:
            fieldRef:
              fieldPath: spec.serviceAccountName
        # For auto detection, leave this parameter unset
        #- name: FLEXVOLUME_DIR
        - name: FLEXVOLUME_DIR
          # FOR RKE
          #value: "/var/lib/kubelet/volumeplugins"
          value: "/var/lib/kubelet/volumeplugins"
          # FOR GKE
          #value: "/home/kubernetes/flexvolume/"
      serviceAccountName: longhorn-service-account
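After deployment, one way to confirm the FlexVolume driver landed where the kubelet expects it is to list the plugin directory on a node. A minimal sketch, assuming the Kubernetes default plugin directory; the `rancher.io~longhorn` subdirectory follows the FlexVolume `vendor~driver` naming convention:
```
ls /usr/libexec/kubernetes/kubelet-plugins/volume/exec/rancher.io~longhorn/
# expected: the longhorn driver binary
```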

View File

@ -13,7 +13,7 @@ Longhorn is 100% open source software. Project source code is spread across a nu
## Prerequisites
1. Docker v1.13+
2. Kubernetes v1.8+ cluster with 1 or more nodes and Mount Propagation feature enabled. If your Kubernetes cluster was provisioned by Rancher v2.0.7+ or later, MountPropagation feature is enabled by default. [Check your Kubernetes environment now](https://github.com/rancher/longhorn#environment-check-script). If MountPropagation is disabled, the Kubernetes Flexvolume driver will be deployed instead of the default CSI driver and the user would need to set the FLEXVOLUME DIR parameter correctly in the chart, based on the result of the environment check. Base Image feature will also be disabled if MountPropagation is disabled.
2. Kubernetes v1.8+ cluster with 1 or more nodes and the Mount Propagation feature enabled. If your Kubernetes cluster was provisioned by Rancher v2.0.7 or later, the MountPropagation feature is enabled by default. [Check your Kubernetes environment now](https://github.com/rancher/longhorn#environment-check-script). If MountPropagation is disabled, the Kubernetes FlexVolume driver will be deployed instead of the default CSI driver. The Base Image feature will also be disabled if MountPropagation is disabled.
3. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
4. Make sure `open-iscsi` has been installed on all nodes of the Kubernetes cluster. For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`.
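For nodes where it is missing, `open-iscsi` can be installed with the distro package manager, e.g. on Ubuntu or Debian (a sketch):
```
sudo apt-get update && sudo apt-get install -y open-iscsi
```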
@ -52,13 +52,13 @@ done
### Volume can be attached/detached from UI, but Kubernetes Pod/StatefulSet etc cannot use it
Check if volume plugin directory has been set correctly.
Check if the volume plugin directory has been set correctly. This is detected automatically unless the user has explicitly set it.
By default, Kubernetes use `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` as the directory for volume plugin drivers, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites).
By default, Kubernetes uses `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites).
But some vendors may choose to change the directory due to various reasons. For example, GKE uses `/home/kubernetes/flexvolume` instead.
Some vendors choose to change the directory for various reasons. For example, GKE uses `/home/kubernetes/flexvolume` instead.
User can find the correct directory by running [the environment check script](https://github.com/rancher/longhorn#environment-check-script).
If you don't know what the correct directory is for your cluster, please leave the Chart question blank.
---
Please see [link](https://github.com/rancher/longhorn) for more information.

View File

@ -1,25 +1,12 @@
# Google Kubernetes Engine
The user must uses `Ubuntu` as the OS on the node, instead of `Container-Optimized OS(default)`, since the latter doesn't support `open-iscsi` which is required by Longhorn.
1. GKE clusters must use the `Ubuntu` OS instead of `Container-Optimized` OS, in order to satisfy Longhorn's `open-iscsi` dependency (see the example cluster-creation command at the end of this section).
The configuration yaml will be slight different for Google Kubernetes Engine (GKE):
1. GKE requires user to manually claim himself as cluster admin to enable RBAC. User need to execute following command before create the Longhorn system using yaml files.
2. GKE requires the user to manually grant themselves cluster-admin privileges to enable RBAC. Before installing Longhorn, run the following command:
```
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<name@example.com>
```
In which `name@example.com` is the user's account name in GCE, and it's case sensitive. See [here](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control) for details.
2. The default Flexvolume plugin directory is different with GKE 1.8+, which is at `/home/kubernetes/flexvolume`. User need to use following command instead:
```
FLEXVOLUME_DIR="/home/kubernetes/flexvolume/"
curl -s https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml|sed "s#^\( *\)value: \"/var/lib/kubelet/volumeplugins\"#\1value: \"${FLEXVOLUME_DIR}\"#g" > longhorn.yaml
kubectl create -f longhorn.yaml
```
See [Troubleshooting](./troubleshooting.md) for details.
where `name@example.com` is the user's account name in GCE; it is case sensitive. See [this document](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control) for more information.
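As referenced in item 1 above, a GKE cluster with the Ubuntu image type can be created like this (cluster name and zone below are hypothetical):
```
gcloud container clusters create longhorn-demo --image-type=UBUNTU --zone=us-central1-a
```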

View File

@ -3,16 +3,14 @@
## Common issues
### Volume can be attached/detached from UI, but Kubernetes Pod/StatefulSet etc cannot use it
Check if volume plugin directory has been set correctly.
Check if the volume plugin directory has been set correctly. This is detected automatically unless the user has explicitly set it.
By default, Kubernetes use `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` as the directory for volume plugin drivers, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites).
By default, Kubernetes uses `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites).
But some vendors may choose to change the directory due to various reasons. For example, GKE uses `/home/kubernetes/flexvolume`, and RKE uses `/var/lib/kubelet/volumeplugins`.
Some vendors choose to change the directory for various reasons. For example, GKE uses `/home/kubernetes/flexvolume` instead.
Users can find the correct directory by running `ps aux | grep kubelet` on the host and checking the `--volume-plugin-dir` parameter. If it is not set, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used.
Users can also use the [environment check script](../README.md#environment-check-script) for this purpose.
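A minimal sketch of the `ps aux` check, run directly on the node:
```
# Print kubelet's --volume-plugin-dir argument if one was set; no output
# means the default /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ applies
ps aux | grep '[k]ubelet' | tr ' ' '\n' | grep -- '--volume-plugin-dir'
```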
## Troubleshooting guide
There are a few components in Longhorn: Manager, Engine, Driver and UI. All of these components run as pods in the `longhorn-system` namespace by default inside the Kubernetes cluster.

View File

@ -51,83 +51,11 @@ EOF
  kubectl create -f $TEMP_DIR/environment_check.yaml
}
create_pod() {
  cat <<EOF > $TEMP_DIR/detect-flexvol-dir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: detect-flexvol-dir
spec:
  containers:
  - name: detect-flexvol-dir
    image: busybox
    command: ["/bin/sh"]
    args:
    - -c
    - |
      find_kubelet_proc() {
        for proc in \`find /proc -type d -maxdepth 1\`; do
          if [ ! -f \$proc/cmdline ]; then
            continue
          fi
          if [[ "\$(cat \$proc/cmdline | tr '\000' '\n' | head -n1 | tr '/' '\n' | tail -n1)" == "kubelet" ]]; then
            echo \$proc
            return
          fi
        done
      }
      get_flexvolume_path() {
        proc=\$(find_kubelet_proc)
        if [ "\$proc" != "" ]; then
          path=\$(cat \$proc/cmdline | tr '\000' '\n' | grep volume-plugin-dir | tr '=' '\n' | tail -n1)
          if [ "\$path" == "" ]; then
            echo '/usr/libexec/kubernetes/kubelet-plugins/volume/exec/'
          else
            echo \$path
          fi
          return
        fi
        echo 'no kubelet process found, dunno'
      }
      get_flexvolume_path
    securityContext:
      privileged: true
  hostPID: true
  restartPolicy: Never
EOF
  kubectl create -f $TEMP_DIR/detect-flexvol-dir.yaml
}
cleanup() {
  echo "cleaning up detection workloads..."
  kubectl delete -f $TEMP_DIR/environment_check.yaml &
  a=$!
  kubectl delete -f $TEMP_DIR/detect-flexvol-dir.yaml &
  b=$!
  wait $a
  wait $b
  echo "cleaning up..."
  kubectl delete -f $TEMP_DIR/environment_check.yaml
  rm -rf $TEMP_DIR
  echo "clean up completed"
}
wait_pod_ready() {
  while true; do
    local pod=$(kubectl get po/detect-flexvol-dir -o json)
    local phase=$(echo $pod | jq -r .status.phase)
    if [ "$phase" == "Succeeded" ]; then
      echo "pod/detect-flexvol-dir completed"
      return
    fi
    echo "waiting for pod/detect-flexvol-dir to finish"
    sleep 3
  done
}
validate_pod() {
  flexvol_path=$(kubectl logs detect-flexvol-dir)
  echo -e "\n FLEXVOLUME_DIR=\"${flexvol_path}\"\n"
  echo "clean up complete"
}
wait_ds_ready() {
@ -175,10 +103,7 @@ validate_ds() {
dependencies kubectl jq mktemp
TEMP_DIR=$(mktemp -d)
trap cleanup EXIT
create_pod
create_ds
wait_pod_ready
wait_ds_ready
validate_pod
validate_ds
exit 0