1. Use different names for the two roles
(and the related role bindings)
2. Make sure the role binding namespace is the same as the
namespace where the nfs provisioner is deployed (see the sketch
below)
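For illustration, a minimal sketch of the second, namespaced pair
(the role/binding names, service account, and the nfs-provisioner
namespace are placeholders, not taken from the actual manifests; the
first role keeps its own distinct name):

    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-provisioner   # distinct from the first role's name
      namespace: nfs-provisioner             # namespace the provisioner runs in
    rules:
      - apiGroups: [""]
        resources: ["endpoints"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: leader-locking-nfs-provisioner
      namespace: nfs-provisioner             # must match the deployment namespace
    subjects:
      - kind: ServiceAccount
        name: nfs-provisioner
        namespace: nfs-provisioner
    roleRef:
      kind: Role
      name: leader-locking-nfs-provisioner
      apiGroup: rbac.authorization.k8s.io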
Signed-off-by: Shuo Wu <shuo@rancher.com>
The test deployment creates 4 replicas that continuously write the
current date and time once a second into the file
`/mnt/nfs-test/test.log`. This is a good test for an RWX volume,
since it emulates an append-only log that is used by multiple pods.
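A sketch of such a test deployment, assuming a ReadWriteMany PVC
named nfs-test backed by the nfs provisioner (names and image are
illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-test
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: nfs-test
      template:
        metadata:
          labels:
            app: nfs-test
        spec:
          containers:
            - name: writer
              image: busybox
              command: ["/bin/sh", "-c"]
              # append the current date & time once a second
              args:
                - while true; do date >> /mnt/nfs-test/test.log; sleep 1; done
              volumeMounts:
                - name: data
                  mountPath: /mnt/nfs-test
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: nfs-test   # RWX claim from the nfs provisioner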
Signed-off-by: Joshua Moody <joshua.moody@rancher.com>
Use a fixed cluster IP for the nfs service, so that on a delete &
recreate of the service the previous PVs still point to this
nfs-provisioner. We cannot use the hostname, since the actual host
doesn't know how to resolve service addresses inside of the cluster.
Supporting that would require installing kube-dns and modifying the
/etc/resolv.conf file on each host.
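A sketch of the idea; the IP (from a typical service CIDR) and the
names are placeholders. The provisioner then puts this IP, rather
than the service hostname, into spec.nfs.server of every PV it
creates, and recreating the service with the same clusterIP keeps
those PVs valid:

    kind: Service
    apiVersion: v1
    metadata:
      name: nfs-provisioner
    spec:
      clusterIP: 10.43.0.50   # fixed; survives a delete & recreate of the service
      selector:
        app: nfs-provisioner
      ports:
        - name: nfs
          port: 2049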
Signed-off-by: Joshua Moody <joshua.moody@rancher.com>
This makes the nfs client use a new source port for each TCP
reconnect. This way, after a crash, the faulty connection isn't kept
alive in the NAT connection-tracking cache, which should allow the
cluster IP to be resolved to the new destination pod IP.
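This behavior matches the noresvport NFS mount option (an assumption;
the commit doesn't name the flag). Assuming that is the option in
question, it could be wired in through the storage class mount
options (class and provisioner names are placeholders):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: nfs
    provisioner: example.com/nfs    # placeholder provisioner name
    mountOptions:
      - vers=4.1        # assumed NFS version
      - noresvport      # new client source port on each TCP reconnect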
Signed-off-by: Joshua Moody <joshua.moody@rancher.com>
Add tolerations so that the nfs provisioner pod gets evicted from a
failing node after 60 seconds + a 30 second grace period (relevant
for the volume attachment (va) recovery policy).
Add liveness + readiness probes, so that no traffic gets routed to a
failed nfs server. Disable device-based fsids (major:minor), since
our block device mapping can change from node to node, which makes
the IDs unstable.
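A sketch of the relevant pod spec pieces combined into one template.
The image, provisioner name, ports, and probe timings are
assumptions, and it is assumed the fsid change is the upstream
nfs-provisioner's -device-based-fsids flag being switched off:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-provisioner
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nfs-provisioner
      template:
        metadata:
          labels:
            app: nfs-provisioner
        spec:
          # evict from a failed node after 60s (+ 30s grace period below)
          tolerations:
            - key: node.kubernetes.io/not-ready
              operator: Exists
              effect: NoExecute
              tolerationSeconds: 60
            - key: node.kubernetes.io/unreachable
              operator: Exists
              effect: NoExecute
              tolerationSeconds: 60
          terminationGracePeriodSeconds: 30
          containers:
            - name: nfs-provisioner
              image: quay.io/kubernetes_incubator/nfs-provisioner:v2.3.0  # assumed
              args:
                - "-provisioner=example.com/nfs"   # placeholder provisioner name
                - "-device-based-fsids=false"      # stable fsids across nodes
              ports:
                - name: nfs
                  containerPort: 2049
              # no traffic is routed (and the pod is restarted) if the
              # nfs server stops answering
              livenessProbe:
                tcpSocket:
                  port: 2049
                initialDelaySeconds: 10
                periodSeconds: 10
              readinessProbe:
                tcpSocket:
                  port: 2049
                periodSeconds: 10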
Signed-off-by: Joshua Moody <joshua.moody@rancher.com>