I'm creating an NFS server Deployment/Service in my k8s cluster:
apiVersion: v1
kind: Service
metadata:
  name: nfs
  labels:
    app: nfs
spec:
  selector:
    app: nfs
  ports:
    - name: tcp-2049
      port: 2049
      protocol: TCP
    - name: rpcbind
      port: 111
      protocol: UDP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-claim
  labels:
    app: nfs
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nfs
  labels:
    app: nfs
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nfs
    spec:
      containers:
        - name: nfs
          image: itsthenetwork/nfs-server-alpine
          env:
            - name: SHARED_DIRECTORY
              value: /data
          securityContext:
            privileged: true
          ports:
            - containerPort: 2049
              name: tcp-2049
            - containerPort: 111
              name: rpcbind
          volumeMounts:
            - name: nfs-persistent-storage
              mountPath: /data
      volumes:
        - name: nfs-persistent-storage
          persistentVolumeClaim:
            claimName: nfs-pv-claim
I deploy it to my staging namespace:
kubectl -n staging apply -f k8s.yml
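To confirm the rollout went through, I check the created objects and the server's logs (resource names and labels are the ones from the manifest above):

kubectl -n staging get deploy,svc,pvc -l app=nfs
kubectl -n staging logs -l app=nfs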
It seems to work fine; these are the startup logs:
Writing SHARED_DIRECTORY to /etc/exports file
The PERMITTED environment variable is unset or null, defaulting to '*'.
This means any client can mount.
The READ_ONLY environment variable is unset or null, defaulting to 'rw'.
Clients have read/write access.
The SYNC environment variable is unset or null, defaulting to 'async' mode.
Writes will not be immediately written to disk.
Displaying /etc/exports contents:
/data *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)
Starting rpcbind...
Displaying rpcbind status...
program version netid address service owner
100000 4 tcp6 ::.0.111 - superuser
100000 3 tcp6 ::.0.111 - superuser
100000 4 udp6 ::.0.111 - superuser
100000 3 udp6 ::.0.111 - superuser
100000 4 tcp 0.0.0.0.0.111 - superuser
100000 3 tcp 0.0.0.0.0.111 - superuser
100000 2 tcp 0.0.0.0.0.111 - superuser
100000 4 udp 0.0.0.0.0.111 - superuser
100000 3 udp 0.0.0.0.0.111 - superuser
100000 2 udp 0.0.0.0.0.111 - superuser
100000 4 local /var/run/rpcbind.sock - superuser
100000 3 local /var/run/rpcbind.sock - superuser
Starting NFS in the background...
rpc.nfsd: knfsd is currently down
rpc.nfsd: Writing version string to kernel: -2 -3 +4 +4.1 +4.2
rpc.nfsd: Created AF_INET TCP socket.
rpc.nfsd: Created AF_INET6 TCP socket.
Exporting File System...
exporting *:/data
/data <world>
Starting Mountd in the background...
Startup successful.
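To double-check the export itself, I can look inside the running pod (this assumes showmount is shipped in the itsthenetwork/nfs-server-alpine image; with replicas: 1 there is only one pod to pick):

kubectl -n staging exec "$(kubectl -n staging get pod -l app=nfs -o name)" -- cat /etc/exports
kubectl -n staging exec "$(kubectl -n staging get pod -l app=nfs -o name)" -- showmount -e localhost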
I'm trying to mount it in an nginx Deployment to share my certificates:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: staging
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: shared-certs
          nfs:
            server: nfs.staging
            path: /data/nginx-certs
      containers:
        - name: nginx
          image: ...
          imagePullPolicy: Always
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
          volumeMounts:
            - name: shared-certs
              mountPath: /etc/ssl
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: staging
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
    - name: https
      port: 443
      protocol: TCP
  type: LoadBalancer
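I apply it the same way and then look at the pod events (the file name is just what I call it locally):

kubectl -n staging apply -f nginx.yml
kubectl -n staging describe pod -l app=nginx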
I'm currently getting this error:
MountVolume.SetUp failed for volume "shared-certs" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/26fc9bb0-da04-11e9-b286-02b0d58bd308/volumes/kubernetes.io~nfs/shared-certs --scope -- mount -t nfs nfs.staging:/data/nginx-certs /var/lib/kubelet/pods/26fc9bb0-da04-11e9-b286-02b0d58bd308/volumes/kubernetes.io~nfs/shared-certs
Output: Running scope as unit: run-r010f6444b58c475ab4c2dcf247f52615.scope
mount.nfs: Failed to resolve server nfs.staging: Name or service not known
But when I ssh into the nginx pod and ping nfs.staging, the hostname is resolved without problems.
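My understanding is that the mount command shown in the error is run by the kubelet on the node itself (via systemd-run), not inside the pod, so it is the node's resolver rather than the pod's that has to know nfs.staging. A rough check I'm planning, assuming SSH access to a worker node (the ClusterIP lookup is only a possible fallback, not something I've confirmed fixes it):

# on a worker node: can the node itself resolve the service name?
getent hosts nfs.staging
# fallback idea: use the Service ClusterIP instead of the DNS name in the nfs volume
kubectl -n staging get svc nfs -o jsonpath='{.spec.clusterIP}'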