The traffic to/from the Longhorn webhook server comes from the kube-apiserver.
The only restriction we can add is a network policy on the ingress port,
because we can't know the default kube-apiserver Pod labels of each
Kubernetes distro. Therefore, we can't add a label selector to the
network policy rule to restrict access to the Longhorn webhook server
to traffic that comes from the kube-apiserver.
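The restriction described above might look like the following NetworkPolicy. This is a hedged sketch only: the namespace, pod labels, and port number are assumptions for illustration, not values taken from the actual Longhorn manifests.

```yaml
# Sketch: allow ingress to the webhook port only, with no 'from' selector.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: longhorn-webhook-ingress     # assumed name
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      app: longhorn-manager          # assumed label of the webhook server pod
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 9443                 # assumed webhook port
      # Note: no 'from' pod selector here. We cannot match the
      # kube-apiserver pod, because its labels differ per distro.
```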
Longhorn 3513
Signed-off-by: JenTing Hsiao <jenting.hsiao@suse.com>
(cherry picked from commit 769e85bc80b6351a081a79ddf83ab181cf956e23)
1. Use different names for the 2 roles
(and the related role bindings)
2. Make sure the role binding namespace is the same as the
namespace where nfs provisioner is deployed
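The two points above could translate into RBAC objects like the following. This is a hedged sketch: the role name, service account, and namespace are placeholders, not copied from the real manifest.

```yaml
# Sketch: a distinct role name per role, and the binding placed in the
# same namespace as the nfs provisioner deployment.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leader-locking-nfs-provisioner   # assumed name, distinct per role
  namespace: nfs-provisioner             # must match the provisioner's namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner                # assumed service account
    namespace: nfs-provisioner
```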
Signed-off-by: Shuo Wu <shuo@rancher.com>
The test deployment creates 4 replicas that continuously write the
current date and time once a second into the file `/mnt/nfs-test/test.log`.
This is a good test for an RWX volume, since it simulates an append-only
log that is used by multiple pods.
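The writer loop each replica runs can be sketched as below. This is an assumption about the container command, not the actual manifest; the log path and iteration bound are illustrative (the real pods write to `/mnt/nfs-test/test.log` and loop forever).

```shell
#!/bin/sh
# Hedged sketch of the per-replica writer; the real container command may differ.
LOGFILE="${LOGFILE:-/tmp/test.log}"    # real deployment: /mnt/nfs-test/test.log
ITERATIONS="${ITERATIONS:-3}"          # real deployment loops forever

: > "$LOGFILE"                         # start fresh for this demo; real pods append
i=0
while [ "$i" -lt "$ITERATIONS" ]; do
  date >> "$LOGFILE"                   # append the current date/time
  sleep 1                              # once per second
  i=$((i + 1))
done
```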
Signed-off-by: Joshua Moody <joshua.moody@rancher.com>
So that on a delete & recreate of the service the previous PVs still
point to this nfs-provisioner. We cannot use the hostname, since the actual
host doesn't know how to resolve service addresses inside of the cluster.
Supporting this would require installing kube-dns and modifying the
/etc/resolv.conf file on each host.
Signed-off-by: Joshua Moody <joshua.moody@rancher.com>
This makes the NFS client use a new source port for each TCP reconnect.
This way, after a crash, the faulty connection isn't kept alive in the
connection cache (NAT). This should allow the cluster IP to resolve to
the new destination pod IP.
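The behavior described above matches the NFS `noresvport` mount option; attributing it to that option is an assumption, since the commit doesn't name the mechanism. An illustrative fstab-style config fragment (paths and options are placeholders):

```
# Illustrative only, not from the commit. 'noresvport' tells the NFS
# client to pick a new, non-privileged source port when it re-establishes
# the TCP connection, so the stale NAT/conntrack entry for the dead
# connection is not reused.
<cluster-ip>:/export  /mnt/nfs  nfs  vers=4.1,noresvport  0  0
```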
Signed-off-by: Joshua Moody <joshua.moody@rancher.com>
Add tolerations so that the nfs provisioner pod gets evicted from a failing
node after 60 seconds + a 30-second grace period (relevant for the VA recovery
policy). Add liveness + readiness probes, so that no traffic gets routed to a
failed nfs server. Disable device-based fsids (major:minor), since our block
device mapping can change from node to node, which makes the IDs unstable.
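The toleration and probe changes above might look like the following pod-spec fragment. This is a hedged sketch: the toleration keys are the standard Kubernetes taints, but the probe type, port, and timings are assumptions, not values from the real manifest.

```yaml
# Sketch of the pod-spec additions; values are illustrative.
tolerations:
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60    # evict 60s after the node becomes unreachable
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60
readinessProbe:              # stop routing traffic to a failed nfs server
  tcpSocket:
    port: 2049               # standard NFS port (assumed probe target)
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 2049
  periodSeconds: 10
```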
Signed-off-by: Joshua Moody <joshua.moody@rancher.com>
Author: Sheng Yang <sheng.yang@rancher.com>
Date: Sat Feb 29 18:29:21 2020 -0800
example: Explain the magic number of staleReplicaTimeout
Signed-off-by: Sheng Yang <sheng.yang@rancher.com>
I can't find a single "spec" for the parameters, so I figure it's best to explain it in the examples and documentation.
Signed-off-by: ted <ted@timmons.me>
commit 8e060dce288c4dfa054a4b0a188559e624aeb3c8
Author: James Oliver <joliver@rancher.com>
Date: Fri Aug 31 16:09:06 2018 -0700
Add script to tear down Longhorn system
Longhorn manager: 5871843d78168db37d156460a01c56d8620b6f8e
Changes:
1. Complete rewrite of orchestration of Longhorn manager
2. Updated the deployment model of Longhorn driver
This breaks compatibility with the previous version. Please clean up and
re-deploy.