Merge pull request #443 from rancher/v0.4.1

V0.4.1
Sheng Yang 2019-03-22 20:11:35 -07:00 committed by GitHub
commit 7691a575a5
7 changed files with 247 additions and 9 deletions


@@ -10,7 +10,7 @@ You can read more details of Longhorn and its design [here](http://rancher.com/m
Longhorn is a work in progress. It is alpha-quality software at the moment. We appreciate your comments as we continue to work on it.
The latest release of Longhorn is **v0.4.0**, shipped with Longhorn Engine **v0.4.0** as the default engine image.
The latest release of Longhorn is **v0.4.1**, shipped with Longhorn Engine **v0.4.1** as the default engine image.
## Source code
Longhorn is 100% open source software. Project source code is spread across a number of repos:
@@ -264,6 +264,8 @@ Longhorn will always try to maintain at least the given number of healthy replicas f
### [Google Kubernetes Engine](./docs/gke.md)
### [Upgrade](./docs/upgrade.md)
### [Deal with Kubernetes node failure](./docs/node-failure.md)
### [Use CSI driver on RancherOS + RKE](./docs/rancheros.md)
### [Restore a backup to an image file](./docs/restore-to-file.md)
## Troubleshooting
You can click the `Generate Support Bundle` link at the bottom of the UI to download a zip file containing Longhorn-related configuration and logs.


@@ -181,7 +181,7 @@ spec:
spec:
containers:
- name: longhorn-manager
image: rancher/longhorn-manager:v0.4.0
image: rancher/longhorn-manager:v0.4.1
imagePullPolicy: Always
securityContext:
privileged: true
@@ -190,9 +190,9 @@ spec:
- -d
- daemon
- --engine-image
- rancher/longhorn-engine:v0.4.0
- rancher/longhorn-engine:v0.4.1
- --manager-image
- rancher/longhorn-manager:v0.4.0
- rancher/longhorn-manager:v0.4.1
- --service-account
- longhorn-service-account
ports:
@@ -269,7 +269,7 @@ spec:
spec:
containers:
- name: longhorn-ui
image: rancher/longhorn-ui:v0.4.0
image: rancher/longhorn-ui:v0.4.1
ports:
- containerPort: 8000
env:
@@ -308,23 +308,26 @@ spec:
spec:
initContainers:
- name: wait-longhorn-manager
image: rancher/longhorn-manager:v0.4.0
image: rancher/longhorn-manager:v0.4.1
command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done']
containers:
- name: longhorn-driver-deployer
image: rancher/longhorn-manager:v0.4.0
image: rancher/longhorn-manager:v0.4.1
imagePullPolicy: Always
command:
- longhorn-manager
- -d
- deploy-driver
- --manager-image
- rancher/longhorn-manager:v0.4.0
- rancher/longhorn-manager:v0.4.1
- --manager-url
- http://longhorn-backend:9500/v1
# manually choose "flexvolume" or "csi"
#- --driver
#- flexvolume
# manually set root directory for csi
#- --kubelet-root-dir
#- /var/lib/rancher/k3s/agent/kubelet
env:
- name: POD_NAMESPACE
valueFrom:
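The commented flags above are meant to be uncommented when auto-detection doesn't fit. For example, on k3s the driver and kubelet root directory could be set explicitly; a sketch using only the values from the comments above, not part of the shipped manifest:

```
        - --driver
        - csi
        - --kubelet-root-dir
        - /var/lib/rancher/k3s/agent/kubelet
```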

docs/rancheros.md Normal file

@@ -0,0 +1,64 @@
# Longhorn CSI on RancherOS + RKE
## Requirements
1. Kubernetes v1.11 or higher.
2. Longhorn v0.4.1 or higher.
3. RancherOS Ubuntu console.
## Instructions
### For Kubernetes v1.11 only
The following step is not needed for Kubernetes v1.12+.
Add `extra_binds` for the kubelet in the RKE `cluster.yml`:
```
services:
kubelet:
extra_binds:
- "/opt/rke/var/lib/kubelet/plugins:/var/lib/kubelet/plugins"
```
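After editing `cluster.yml`, apply the change with the usual RKE workflow, e.g. `rke up` (standard RKE usage, not specific to Longhorn).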
### For each node:
#### 1. Switch to ubuntu console
Run `sudo ros console switch ubuntu`, then type `y` to confirm.
#### 2. Install open-iscsi.
```
sudo su
apt update
apt install -y open-iscsi
```
#### 3. Modify the iscsi configuration.
1. Open the config file `/etc/iscsi/iscsid.conf`.
2. Comment out `iscsid.startup = /bin/systemctl start iscsid.socket`.
3. Uncomment `iscsid.startup = /sbin/iscsid`. The resulting lines are shown below.
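After the change, the relevant lines of `/etc/iscsi/iscsid.conf` should look like this:
```
# iscsid.startup = /bin/systemctl start iscsid.socket
iscsid.startup = /sbin/iscsid
```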
## Background
CSI doesn't work with RancherOS + RKE before Longhorn v0.4.1. The reasons are:
1. RancherOS sets the kubelet argument `root-dir=/opt/rke/var/lib/kubelet`, which differs from the default value `/var/lib/kubelet`.
2. **For k8s v1.12+**
Kubelet detects `csi.sock` according to the `<--kubelet-registration-path>` argument passed in by the Kubernetes CSI driver-registrar, and looks for `<drivername>-reg.sock` (for Longhorn, `io.rancher.longhorn-reg.sock`) on the kubelet path `<root-dir>/plugins`.
**For k8s v1.11**
Kubelet looks for both sockets on the kubelet path `/var/lib/kubelet/plugins`.
3. By default, the Longhorn CSI driver creates and exposes these two sock files on the host path `/var/lib/kubelet/plugins`.
4. Kubelet therefore cannot find `<drivername>-reg.sock`, so the CSI driver doesn't work.
5. Furthermore, kubelet instructs the CSI plugin to mount the Longhorn volume at `<root-dir>/pods/<pod-name>/volumes/kubernetes.io~csi/<volume-name>/mount`.
But this path inside the CSI plugin container won't be bind-mounted on the host path, so the mount operation for the Longhorn volume has no effect.
Hence Kubernetes cannot connect to Longhorn using the CSI driver.
Longhorn v0.4.1 adds the `--kubelet-root-dir` argument to the driver deployer (shipped commented out in `longhorn.yaml`) to handle non-default kubelet directories; see the sketch below.
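A sketch of the corresponding `longhorn-driver-deployer` arguments for RancherOS + RKE, assuming the `root-dir` above:
```
        command:
        - longhorn-manager
        - -d
        - deploy-driver
        - --manager-image
        - rancher/longhorn-manager:v0.4.1
        - --manager-url
        - http://longhorn-backend:9500/v1
        # point the CSI driver at RancherOS's kubelet root directory
        - --kubelet-root-dir
        - /opt/rke/var/lib/kubelet
```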
## Reference
https://github.com/kubernetes-csi/driver-registrar

docs/restore-to-file.md Normal file

@@ -0,0 +1,27 @@
# Use command restore-to-file
This command gives users the ability to restore a backup to a `raw` image or a `qcow2` image. If the backup is based on a backing file, users should provide the backing file as a `qcow2` image with the `--backing-file` parameter.
## Instructions
1. Copy the yaml template
1.1 Volume has no base image: make a copy of `examples/restore_to_file.yaml.template`, e.g. as `restore.yaml`.
1.2 Volume has a base image: make a copy of `examples/restore_to_file_with_base_image.yaml.template`, e.g. as `restore.yaml`, and set the `backing-file` argument by replacing `<BASE_IMAGE>` with your base image, e.g. `rancher/longhorn-test:baseimage-ext4`.
2. Set the node the output file should be placed on by replacing `<NODE_NAME>`, e.g. `node1`.
3. Specify the host path of the output file by modifying the `hostPath` field of the volume `disk-directory`. By default the directory is `/tmp/restore/`.
4. Set the first argument (the backup URL) by replacing `<BACKUP_URL>`, e.g. `s3://backupbucket@us-east-1/backupstore?backup=backup-bd326da2c4414b02&volume=volumeexamplename`. Do not delete the surrounding `''`.
5. Set the `output-file` argument by replacing `<OUTPUT_FILE>`, e.g. `volume.raw` or `volume.qcow2`.
6. Set the `output-format` argument by replacing `<OUTPUT_FORMAT>`. Only `raw` and `qcow2` are supported for now.
7. Set the S3 Credential Secret by replacing `<S3_SECRET_NAME>`, e.g. `minio-secret`.
8. Execute the yaml, e.g. with `kubectl create -f restore.yaml`.
9. Watch the result using `kubectl -n longhorn-system get pod restore-to-file -w`. A filled-in example of the container command follows this list.
After the pod status changes to `Completed`, you should be able to find `<OUTPUT_FILE>` at e.g. `/tmp/restore` on `<NODE_NAME>`.
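For reference, this is what the container command from the template looks like after the substitutions above, using only the example values from these steps:
```
        command:
        - /bin/sh
        - -c
        - longhorn backup restore-to-file
          's3://backupbucket@us-east-1/backupstore?backup=backup-bd326da2c4414b02&volume=volumeexamplename'
          --output-file '/tmp/restore/volume.raw'
          --output-format raw
```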

examples/restore_to_file.yaml.template Normal file

@@ -0,0 +1,48 @@
apiVersion: v1
kind: Pod
metadata:
name: restore-to-file
namespace: longhorn-system
spec:
nodeName: <NODE_NAME>
containers:
- name: restore-to-file
command:
# set restore-to-file arguments here
- /bin/sh
- -c
- longhorn backup restore-to-file
'<BACKUP_URL>'
--output-file '/tmp/restore/<OUTPUT_FILE>'
--output-format <OUTPUT_FORMAT>
# the version of longhorn engine should be v0.4.1 or higher
image: rancher/longhorn-engine:v0.4.1
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
volumeMounts:
- name: disk-directory
mountPath: /tmp/restore # the argument <output-file> should be in this directory
env:
# set Backup Target Credential Secret here.
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: <S3_SECRET_NAME>
key: AWS_ACCESS_KEY_ID
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: <S3_SECRET_NAME>
key: AWS_SECRET_ACCESS_KEY
- name: AWS_ENDPOINTS
valueFrom:
secretKeyRef:
name: <S3_SECRET_NAME>
key: AWS_ENDPOINTS
volumes:
# the output file can be found on this host path
- name: disk-directory
hostPath:
path: /tmp/restore
restartPolicy: Never

examples/restore_to_file_with_base_image.yaml.template Normal file

@@ -0,0 +1,94 @@
apiVersion: v1
kind: Pod
metadata:
name: restore-to-file
namespace: longhorn-system
spec:
nodeName: <NODE_NAME>
initContainers:
- name: prime-base-image
# this init container only ensures the base image is pulled onto the node
command:
- /bin/sh
- -c
- echo primed-base-image
# set base image here
image: <BASE_IMAGE>
imagePullPolicy: Always
containers:
- name: base-image
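# bind-mounts the base image's /base_image directory into the shared volume,
# then waits for the restore container to create /talk/done before unmounting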
command:
- /bin/sh
- -c
- mkdir -p /share/base_image &&
mount --bind /base_image/ /share/base_image &&
echo base image mounted at /share/base_image &&
trap 'umount /share/base_image && echo unmounted' TERM &&
while true; do $(ls /talk/done 2>&1); if [ $? -eq 0 ]; then break;
fi; echo waiting; sleep 1; done;
umount /share/base_image && echo unmounted
# set base image here
image: <BASE_IMAGE>
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
volumeMounts:
- name: share
mountPath: /share
mountPropagation: Bidirectional
- name: talk
mountPath: /talk
- name: restore-to-file
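# waits until the base image appears under /share, then restores the backup
# with it as the backing file and creates /talk/done to signal completion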
command:
# set restore-to-file arguments here
- /bin/sh
- -c
- while true; do list=$(ls /share/base_image/* 2>&1); if [ $? -eq 0 ]; then break;
fi; echo waiting; sleep 1; done; echo Directory found $list;
longhorn backup restore-to-file
'<BACKUP_URL>'
--backing-file $list
--output-file '/tmp/restore/<OUTPUT_FILE>'
--output-format <OUTPUT_FORMAT>
&& touch /talk/done && chmod 777 /talk/done && echo created /talk/done
# the version of longhorn engine should be v0.4.1 or higher
image: rancher/longhorn-engine:v0.4.1
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
volumeMounts:
- name: share
mountPath: /share
mountPropagation: HostToContainer
readOnly: true
- name: talk
mountPath: /talk
- name: disk-directory
mountPath: /tmp/restore # the argument <output-file> should be in this directory
env:
# set Backup Target Credential Secret here.
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: <S3_SECRET_NAME>
key: AWS_ACCESS_KEY_ID
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: <S3_SECRET_NAME>
key: AWS_SECRET_ACCESS_KEY
- name: AWS_ENDPOINTS
valueFrom:
secretKeyRef:
name: <S3_SECRET_NAME>
key: AWS_ENDPOINTS
volumes:
- name: share
emptyDir: {}
- name: talk
emptyDir: {}
# the output file can be found on this host path
- name: disk-directory
hostPath:
path: /tmp/restore
restartPolicy: Never


@@ -12,7 +12,7 @@ spec:
spec:
containers:
- name: longhorn-uninstall
image: rancher/longhorn-manager:v0.4.0
image: rancher/longhorn-manager:v0.4.1
imagePullPolicy: Always
command:
- longhorn-manager