I'm trying to set up the following platform in Google Cloud:
2 private GKE clusters in 2 different VPCs; to give them Internet access, each VPC has Cloud NAT configured.
I need the two GKE clusters to communicate with each other, but with VPC peering I only get pod-to-pod connectivity, not pod -> Service or pod -> internal load balancer.
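For reference, each Cloud NAT was configured roughly like this (a sketch; the router and NAT names are illustrative, not the real ones):

$ gcloud compute routers create Shrek01-router \
    --network=Shrek01 \
    --region=asia-east1
$ gcloud compute routers nats create Shrek01-nat \
    --router=Shrek01-router \
    --region=asia-east1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges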
Clusters:
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
Shrek01 asia-east1-a 1.16.8-gke.15 <none> g1-small 1.16.8-gke.15 3 RUNNING
Shrek02 asia-east2-a 1.15.9-gke.24 <none> g1-small 1.15.9-gke.24 3 RUNNING
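The clusters were created along these lines (a sketch: the private-cluster flags and the master CIDR are assumptions; the pod and services CIDRs match the secondary ranges that appear in the firewall rules below):

$ gcloud container clusters create Shrek01 \
    --zone=asia-east1-a \
    --network=Shrek01 \
    --subnetwork=Shrek01 \
    --machine-type=g1-small \
    --num-nodes=3 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28 \
    --cluster-ipv4-cidr=10.113.32.0/19 \
    --services-ipv4-cidr=10.213.32.0/19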
VPCs:
NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
Shrek01 CUSTOM REGIONAL
Shrek02 CUSTOM REGIONAL
Subnets:
NAME REGION NETWORK RANGE
Shrek01 asia-east1 Shrek01 192.168.13.0/24
Shrek02 asia-east2 Shrek02 192.168.14.0/24
Peerings:
NAME NETWORK PEER_PROJECT PEER_NETWORK AUTO_CREATE_ROUTES STATE STATE_DETAILS
Shrek01-Shrek01-peering Shrek01 pocprod2-2019001 Shrek02 True ACTIVE [2020-05-16T14:29:57.864-07:00]: Connected.
Shrek02-Shrek01-peering Shrek02 pocprod2-2019001 Shrek01 True ACTIVE [2020-05-16T14:29:57.864-07:00]: Connected.
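The peerings were created along these lines (a sketch, using the names from the listing above):

$ gcloud compute networks peerings create Shrek01-Shrek01-peering \
    --network=Shrek01 \
    --peer-project=pocprod2-2019001 \
    --peer-network=Shrek02 \
    --auto-create-routes
$ gcloud compute networks peerings create Shrek02-Shrek01-peering \
    --network=Shrek02 \
    --peer-project=pocprod2-2019001 \
    --peer-network=Shrek01 \
    --auto-create-routes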
Firewall rules:
{
  "allowed": [
    {
      "IPProtocol": "all"
    }
  ],
  "creationTimestamp": "2020-05-16T16:05:14.829-07:00",
  "description": "",
  "direction": "INGRESS",
  "disabled": false,
  "id": "6807007164648771397",
  "kind": "compute#firewall",
  "logConfig": {
    "enable": false
  },
  "name": "peering-ingress",
  "network": "https://www.googleapis.com/compute/v1/projects/pocprod2-2019001/global/networks/Shrek01",
  "priority": 1000,
  "selfLink": "https://www.googleapis.com/compute/v1/projects/pocprod2-2019001/global/firewalls/peering-ingress",
  "sourceRanges": [
    "192.168.14.0/24",
    "10.113.64.0/19",
    "10.213.64.0/19"
  ]
}
{
  "allowed": [
    {
      "IPProtocol": "all"
    }
  ],
  "creationTimestamp": "2020-05-16T16:24:28.545-07:00",
  "description": "",
  "direction": "INGRESS",
  "disabled": false,
  "id": "7130188648920500419",
  "kind": "compute#firewall",
  "logConfig": {
    "enable": false
  },
  "name": "Shrek02-peering-ingress",
  "network": "https://www.googleapis.com/compute/v1/projects/pocprod2-2019001/global/networks/Shrek02",
  "priority": 1000,
  "selfLink": "https://www.googleapis.com/compute/v1/projects/pocprod2-2019001/global/firewalls/Shrek02-peering-ingress",
  "sourceRanges": [
    "192.168.13.0/24",
    "10.113.32.0/19",
    "10.213.32.0/19"
  ]
}
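Each rule was created with something like the following (a sketch matching the first JSON object above; the second rule mirrors it on Shrek02):

$ gcloud compute firewall-rules create peering-ingress \
    --network=Shrek01 \
    --direction=INGRESS \
    --allow=all \
    --source-ranges=192.168.14.0/24,10.113.64.0/19,10.213.64.0/19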
k8s cluster Shrek01:
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.213.32.1 <none> 443/TCP 85m <none>
nginx LoadBalancer 10.213.60.14 192.168.13.7 80:32612/TCP 92s app=nginx
nginx-cip ClusterIP 10.213.34.24 <none> 80/TCP 93s app=nginx
nginx-np NodePort 10.213.35.31 <none> 80:30444/TCP 92s app=nginx
kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-64b4f9bb85-9sjcp 1/1 Running 0 3m34s 10.113.34.11 gke-Shrek01-default-pool-f9ecbfcc-dz9z <none> <none>
nginx-64b4f9bb85-l2bzd 1/1 Running 0 3m34s 10.113.32.5 gke-Shrek01-default-pool-f9ecbfcc-pdll <none> <none>
nginx-64b4f9bb85-xd7kw 1/1 Running 0 3m34s 10.113.33.9 gke-Shrek01-default-pool-f9ecbfcc-v67d <none> <none>
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-Shrek01-default-pool-f9ecbfcc-dz9z Ready <none> 89m v1.16.8-gke.15 192.168.13.4 Container-Optimized OS from Google 4.19.109+ docker://19.3.1
gke-Shrek01-default-pool-f9ecbfcc-pdll Ready <none> 89m v1.16.8-gke.15 192.168.13.2 Container-Optimized OS from Google 4.19.109+ docker://19.3.1
gke-Shrek01-default-pool-f9ecbfcc-v67d Ready <none> 89m v1.16.8-gke.15 192.168.13.3 Container-Optimized OS from Google 4.19.109+ docker://19.3.1
# from a pod running in the Shrek02 cluster
root@nginx-5c66c56f55-8jwv2:/# echo ${MY_POD_IP}
10.113.66.9
# internal load balancer
root@nginx-5c66c56f55-8jwv2:/# nc -vz 192.168.13.7 80
192.168.13.7: inverse host lookup failed: Unknown host
(UNKNOWN) [192.168.13.7] 80 (?) : Connection timed out
# internal load balancer's ClusterIP
root@nginx-5c66c56f55-8jwv2:/# nc -vz 10.213.60.14 80
10.213.60.14: inverse host lookup failed: Unknown host
(UNKNOWN) [10.213.60.14] 80 (?) : Connection timed out
# ClusterIP
root@nginx-5c66c56f55-8jwv2:/# nc -vz 10.213.34.24 80
10.213.34.24: inverse host lookup failed: Unknown host
(UNKNOWN) [10.213.34.24] 80 (?) : Connection timed out
# NodePort
root@nginx-5c66c56f55-8jwv2:/# nc -vz 10.213.35.31 80
10.213.35.31: inverse host lookup failed: Unknown host
(UNKNOWN) [10.213.35.31] 80 (?) : Connection timed out
# Pod IP
root@nginx-5c66c56f55-8jwv2:/# nc -vz 10.113.34.11 80
10.113.34.11: inverse host lookup failed: Unknown host
(UNKNOWN) [10.113.34.11] 80 (?) open
root@nginx-5c66c56f55-8jwv2:/# nc -vz 10.113.32.5 80
10.113.32.5: inverse host lookup failed: Unknown host
(UNKNOWN) [10.113.32.5] 80 (?) open
root@nginx-5c66c56f55-8jwv2:/# nc -vz 10.113.33.9 80
10.113.33.9: inverse host lookup failed: Unknown host
(UNKNOWN) [10.113.33.9] 80 (?) open
Did I miss a step? I can't find the error.
An internal load balancer is regional: over VPC peering it is only reachable from clients in the same region unless global access is enabled. Since your clusters are in different regions (asia-east1 and asia-east2), you have two options to use internal load balancing with global access. The first is to update the forwarding rule directly:
$ gcloud compute forwarding-rules update <LB_NAME> \
--region=<REGION> \
--allow-global-access
You can verify the setting with:
$ gcloud compute forwarding-rules describe <LB_NAME> \
    --region=<REGION> \
    --format="get(name,region,allowGlobalAccess)"
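The second option, since the load balancer is managed by GKE, is the global-access Service annotation (documented for GKE internal LoadBalancer Services on recent 1.16+ versions). A minimal sketch, assuming the nginx Service from the question:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80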
NOTE:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service reachable only from within the cluster. ClusterIPs are virtual IPs programmed by kube-proxy on the cluster's own nodes, so they are not routable from the peered VPC and you will not be able to reach them from outside the cluster.
Reproduction:
$ gcloud container clusters list
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
shrek01 europe-west1-b 1.16.8-gke.15 XX.XXX.XX.XXX g1-small 1.16.8-gke.15 3 RUNNING
shrek02 europe-west2-b 1.15.9-gke.24 XXX.XXX.XX.XXX g1-small 1.15.9-gke.24 3 RUNNING
$ gcloud compute networks subnets list
NAME REGION NETWORK RANGE
shrek01 europe-west1 shrek01 192.168.13.0/24
shrek02 europe-west2 shrek02 192.168.14.0/24
$ gcloud compute networks peerings list-routes sh1-sh2 --network=shrek01 --region europe-west1 --direction=INCOMING
DEST_RANGE TYPE NEXT_HOP_REGION PRIORITY STATUS
192.168.14.0/24 SUBNET_PEERING_ROUTE europe-west2 1000 accepted
10.229.0.0/20 SUBNET_PEERING_ROUTE europe-west2 1000 accepted
10.36.0.0/14 SUBNET_PEERING_ROUTE europe-west2 1000 accepted
$ gcloud compute networks peerings list-routes sh2-sh1 --network=shrek02 --region europe-west2 --direction=INCOMING
DEST_RANGE TYPE NEXT_HOP_REGION PRIORITY STATUS
192.168.13.0/24 SUBNET_PEERING_ROUTE europe-west1 1000 accepted
10.154.0.0/20 SUBNET_PEERING_ROUTE europe-west1 1000 accepted
10.24.0.0/14 SUBNET_PEERING_ROUTE europe-west1 1000 accepted
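Node-to-node reachability across the peering can be spot-checked from one of the nodes. A sketch, using a node name and zone from the listings in this answer; the target 192.168.14.2 is a hypothetical shrek02 node IP:

$ gcloud compute ssh gke-shrek01-default-pool-5ffc38d7-70sk \
    --zone=europe-west1-b -- ping -c 2 192.168.14.2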
After confirming that my nodes can ping each other across the VPCs, I tested ingress and connectivity with the following YAMLs.
hello-1.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-1
  template:
    metadata:
      labels:
        app: hello-1
    spec:
      containers:
      - name: hello-1
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - name: http
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-1-svc
spec:
  type: NodePort
  selector:
    app: hello-1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
hello-2.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-2
  template:
    metadata:
      labels:
        app: hello-2
    spec:
      containers:
      - name: hello-2
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - name: http
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-2-svc
spec:
  type: NodePort
  selector:
    app: hello-2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
hello-ingress.yaml:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-1-svc
          servicePort: 80
      - path: /v2
        backend:
          serviceName: hello-2-svc
          servicePort: 80
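The three manifests can then be applied in the shrek01 cluster, e.g.:

$ kubectl apply -f hello-1.yaml -f hello-2.yaml -f hello-ingress.yaml

(The ingress-nginx controller shown below was installed separately; its LoadBalancer Service received the internal address 192.168.13.5.)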
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-1-84d5994678-dx8dv 1/1 Running 0 140m 10.24.0.9 gke-shrek01-default-pool-5ffc38d7-bz35 <none> <none>
hello-1-84d5994678-t74mn 1/1 Running 0 14m 10.24.1.3 gke-shrek01-default-pool-5ffc38d7-70sk <none> <none>
hello-1-84d5994678-zq7t2 1/1 Running 0 14m 10.24.2.9 gke-shrek01-default-pool-5ffc38d7-zfj6 <none> <none>
hello-2-5c4f554ccc-b8j6f 1/1 Running 0 140m 10.24.0.10 gke-shrek01-default-pool-5ffc38d7-bz35 <none> <none>
hello-2-5c4f554ccc-km4ph 1/1 Running 0 13m 10.24.1.4 gke-shrek01-default-pool-5ffc38d7-70sk <none> <none>
hello-2-5c4f554ccc-z4f6n 1/1 Running 0 13m 10.24.2.10 gke-shrek01-default-pool-5ffc38d7-zfj6 <none> <none>
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-1-svc NodePort 10.154.13.186 <none> 80:32186/TCP 140m
hello-2-svc NodePort 10.154.4.214 <none> 80:32450/TCP 140m
$ kubectl get svc ingress-nginx-controller -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.154.10.104 192.168.13.5 80:30112/TCP,443:32156/TCP 4h20m
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
hello-ingress * 192.168.13.5 80 98m
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-shrek01-default-pool-5ffc38d7-70sk Ready <none> 2d19h v1.16.8-gke.15 192.168.13.3 XX.XXX.XX.XXX Container-Optimized OS from Google 4.19.109+ docker://19.3.1
gke-shrek01-default-pool-5ffc38d7-bz35 Ready <none> 2d19h v1.16.8-gke.15 192.168.13.2 XXX.XXX.XX.XXX Container-Optimized OS from Google 4.19.109+ docker://19.3.1
gke-shrek01-default-pool-5ffc38d7-zfj6 Ready <none> 2d19h v1.16.8-gke.15 192.168.13.4 XX.XXX.X.XXX Container-Optimized OS from Google 4.19.109+ docker://19.3.1
Now I connect to the shrek02 cluster, create a pod, and install curl:
project@cloudshell:~$ kubectl run ubuntu --image=ubuntu -it -- /bin/bash
root@ubuntu:/# apt update
root@ubuntu:/# apt install curl
root@ubuntu:/# exit
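To avoid the apt steps on every retry, any image that already ships curl would also work; a sketch, assuming the public curlimages/curl image:

project@cloudshell:~$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- sh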
project@cloudshell:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ubuntu 1/1 Running 1 2m51s 10.36.1.6 gke-shrek02-default-pool-a7a08ac8-0lrz <none> <none>
The pod is running on shrek02; now let's check connectivity to shrek01's resources. Remember that kube-dns is only available inside each cluster, so we connect by IP:

project@cloudshell:~$ kubectl exec -it ubuntu -- /bin/bash
###Hello-1 POD:
root@ubuntu:/# curl 10.24.0.9:8080
Hello, world!
Version: 1.0.0
Hostname: hello-1-84d5994678-dx8dv
###Hello-2 POD:
root@ubuntu:/# curl 10.24.1.4:8080
Hello, world!
Version: 2.0.0
Hostname: hello-2-5c4f554ccc-km4ph
### HELLO-1-SVC USING NODE IP + NODEPORT:
root@ubuntu:/# curl 192.168.13.3:32186
Hello, world!
Version: 1.0.0
Hostname: hello-1-84d5994678-t74mn
### HELLO-2-SVC USING ANOTHER NODE IP + NODEPORT:
root@ubuntu:/# curl 192.168.13.2:32450
Hello, world!
Version: 2.0.0
Hostname: hello-2-5c4f554ccc-km4ph
### NOW LET'S TEST OUR INGRESS which routes "/" to hello-1 and "/v2" to hello-2:
root@ubuntu:/# curl 192.168.13.5/
Hello, world!
Version: 1.0.0
Hostname: hello-1-84d5994678-dx8dv
root@ubuntu:/# curl 192.168.13.5/v2
Hello, world!
Version: 2.0.0
Hostname: hello-2-5c4f554ccc-b8j6f
I hope this helps you troubleshoot your environment; if you have any questions, let me know in the comments.