2017-08-23

kubeadm kube-dns error: cannot reach the external network or other pods. When I use a self-hosted kubeadm cluster on Ubuntu, I cannot reach other pods or the external network from inside a k8s pod, but I can from a plain Docker container.

I tried several different pod network add-ons, including Calico, Weave, and Flannel.

I followed the debugging instructions from here without any success; the logs are below.

$ kubectl exec -ti busybox -- nslookup kubernetes.default 
Server: 10.96.0.10 
Address 1: 10.96.0.10 

nslookup: can't resolve 'kubernetes.default' 


$ kubectl exec busybox cat /etc/resolv.conf 
nameserver 10.96.0.10 
search default.svc.cluster.local svc.cluster.local cluster.local 
options ndots:5 
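Beyond the checks above, one extra step that can help narrow things down (not from the original debugging guide, just a sketch assuming the same `busybox` pod) is to query the kube-dns service IP and the kube-dns pod IP directly, to separate "kube-dns itself is broken" from "the network path to 10.96.0.10 is broken":

```shell
# Hypothetical helper, assuming a running busybox pod as in the question.
check_dns_path() {
  # 1. Query through the service VIP (exercises kube-proxy + the pod network):
  kubectl exec -ti busybox -- nslookup kubernetes.default 10.96.0.10

  # 2. Query the kube-dns pod IP directly (bypasses the service VIP entirely):
  dns_pod_ip=$(kubectl get pods -n kube-system -l k8s-app=kube-dns \
      -o jsonpath='{.items[0].status.podIP}')
  kubectl exec -ti busybox -- nslookup kubernetes.default "$dns_pod_ip"
}
```

If the pod-IP query works but the service-VIP query does not, the problem is more likely in kube-proxy/iptables than in kube-dns.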


$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns 
NAME      READY  STATUS RESTARTS AGE 
kube-dns-2425271678-9zwtd 3/3  Running 0   12m 


$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c kubedns 
I0823 16:02:58.407162  6 dns.go:48] version: 1.14.3-4-gee838f6 
I0823 16:02:58.408957  6 server.go:70] Using configuration read from directory: /kube-dns-config with period 10s 
I0823 16:02:58.409223  6 server.go:113] FLAG: --alsologtostderr="false" 
I0823 16:02:58.409248  6 server.go:113] FLAG: --config-dir="/kube-dns-config" 
I0823 16:02:58.409288  6 server.go:113] FLAG: --config-map="" 
I0823 16:02:58.409301  6 server.go:113] FLAG: --config-map-namespace="kube-system" 
I0823 16:02:58.409309  6 server.go:113] FLAG: --config-period="10s" 
I0823 16:02:58.409325  6 server.go:113] FLAG: --dns-bind-address="0.0.0.0" 
I0823 16:02:58.409333  6 server.go:113] FLAG: --dns-port="10053" 
I0823 16:02:58.409370  6 server.go:113] FLAG: --domain="cluster.local." 
I0823 16:02:58.409387  6 server.go:113] FLAG: --federations="" 
I0823 16:02:58.409401  6 server.go:113] FLAG: --healthz-port="8081" 
I0823 16:02:58.409411  6 server.go:113] FLAG: --initial-sync-timeout="1m0s" 
I0823 16:02:58.409434  6 server.go:113] FLAG: --kube-master-url="" 
I0823 16:02:58.409451  6 server.go:113] FLAG: --kubecfg-file="" 
I0823 16:02:58.409458  6 server.go:113] FLAG: --log-backtrace-at=":0" 
I0823 16:02:58.409470  6 server.go:113] FLAG: --log-dir="" 
I0823 16:02:58.409478  6 server.go:113] FLAG: --log-flush-frequency="5s" 
I0823 16:02:58.409489  6 server.go:113] FLAG: --logtostderr="true" 
I0823 16:02:58.409496  6 server.go:113] FLAG: --nameservers="" 
I0823 16:02:58.409521  6 server.go:113] FLAG: --stderrthreshold="2" 
I0823 16:02:58.409533  6 server.go:113] FLAG: --v="2" 
I0823 16:02:58.409544  6 server.go:113] FLAG: --version="false" 
I0823 16:02:58.409559  6 server.go:113] FLAG: --vmodule="" 
I0823 16:02:58.409728  6 server.go:176] Starting SkyDNS server (0.0.0.0:10053) 
I0823 16:02:58.467505  6 server.go:198] Skydns metrics enabled (/metrics:10055) 
I0823 16:02:58.467640  6 dns.go:147] Starting endpointsController 
I0823 16:02:58.467810  6 dns.go:150] Starting serviceController 
I0823 16:02:58.557166  6 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0] 
I0823 16:02:58.557335  6 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0] 
I0823 16:02:58.968454  6 dns.go:174] Waiting for services and endpoints to be initialized from apiserver... 
I0823 16:02:59.468406  6 dns.go:171] Initialized services and endpoints from apiserver 
I0823 16:02:59.468698  6 server.go:129] Setting up Healthz Handler (/readiness) 
I0823 16:02:59.469064  6 server.go:134] Setting up cache handler (/cache) 
I0823 16:02:59.469305  6 server.go:120] Status HTTP port 8081 


$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq 
I0823 16:02:59.445525  11 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000} 
I0823 16:02:59.445741  11 nanny.go:86] Starting dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] 
I0823 16:02:59.820424  11 nanny.go:108] dnsmasq[38]: started, version 2.76 cachesize 1000 
I0823 16:02:59.820546  11 nanny.go:108] dnsmasq[38]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify 
I0823 16:02:59.820596  11 nanny.go:108] dnsmasq[38]: using nameserver 127.0.0.1#10053 for domain ip6.arpa 
I0823 16:02:59.820623  11 nanny.go:108] dnsmasq[38]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa 
I0823 16:02:59.820659  11 nanny.go:108] dnsmasq[38]: using nameserver 127.0.0.1#10053 for domain cluster.local 
I0823 16:02:59.820736  11 nanny.go:108] dnsmasq[38]: reading /etc/resolv.conf 
I0823 16:02:59.820762  11 nanny.go:108] dnsmasq[38]: using nameserver 127.0.0.1#10053 for domain ip6.arpa 
I0823 16:02:59.820788  11 nanny.go:108] dnsmasq[38]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa 
I0823 16:02:59.820825  11 nanny.go:108] dnsmasq[38]: using nameserver 127.0.0.1#10053 for domain cluster.local 
I0823 16:02:59.820850  11 nanny.go:108] dnsmasq[38]: using nameserver 8.8.8.8#53 
I0823 16:02:59.820928  11 nanny.go:108] dnsmasq[38]: read /etc/hosts - 7 addresses 
I0823 16:02:59.821193  11 nanny.go:111] 
W0823 16:02:59.821212  11 nanny.go:112] Got EOF from stdout 

$ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c sidecar 
ERROR: logging before flag.Parse: I0823 16:03:00.789793  26 main.go:48] Version v1.14.3-4-gee838f6 
ERROR: logging before flag.Parse: I0823 16:03:00.790052  26 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns}) 
ERROR: logging before flag.Parse: I0823 16:03:00.790121  26 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} 
ERROR: logging before flag.Parse: I0823 16:03:00.790419  26 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} 

Below is the /etc/resolv.conf of the master.

$ cat /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) 
#  DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN 
nameserver 8.8.8.8 

$ kubeadm version 
kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T06:43:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"} 

Below is the /etc/resolv.conf of the worker node where the pod is running.

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) 
#  DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN 
nameserver 8.8.4.4 
nameserver 8.8.8. 

Here is the output of sudo iptables -L -n

Chain INPUT (policy ACCEPT) 
target  prot opt source    destination   
cali-INPUT all -- 0.0.0.0/0   0.0.0.0/0   /* cali:Cz_u1IQiXIMmKD4c */ 
KUBE-SERVICES all -- 0.0.0.0/0   0.0.0.0/0   /* kubernetes service portals */ 
KUBE-FIREWALL all -- 0.0.0.0/0   0.0.0.0/0   

Chain FORWARD (policy DROP) 
target  prot opt source    destination   
cali-FORWARD all -- 0.0.0.0/0   0.0.0.0/0   /* cali:wUHhoiAYhphO9Mso */ 
DOCKER-USER all -- 0.0.0.0/0   0.0.0.0/0   
DOCKER-ISOLATION all -- 0.0.0.0/0   0.0.0.0/0   
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   ctstate RELATED,ESTABLISHED 
DOCKER  all -- 0.0.0.0/0   0.0.0.0/0   
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   
WEAVE-NPC all -- 0.0.0.0/0   0.0.0.0/0   
NFLOG  all -- 0.0.0.0/0   0.0.0.0/0   state NEW nflog-group 86 
DROP  all -- 0.0.0.0/0   0.0.0.0/0   
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   ctstate RELATED,ESTABLISHED 

Chain OUTPUT (policy ACCEPT) 
target  prot opt source    destination   
cali-OUTPUT all -- 0.0.0.0/0   0.0.0.0/0   /* cali:tVnHkvAo15HuiPy0 */ 
KUBE-SERVICES all -- 0.0.0.0/0   0.0.0.0/0   /* kubernetes service portals */ 
KUBE-FIREWALL all -- 0.0.0.0/0   0.0.0.0/0   

Chain DOCKER (1 references) 
target  prot opt source    destination   

Chain DOCKER-ISOLATION (1 references) 
target  prot opt source    destination   
RETURN  all -- 0.0.0.0/0   0.0.0.0/0   

Chain DOCKER-USER (1 references) 
target  prot opt source    destination   
RETURN  all -- 0.0.0.0/0   0.0.0.0/0   

Chain KUBE-FIREWALL (2 references) 
target  prot opt source    destination   
DROP  all -- 0.0.0.0/0   0.0.0.0/0   /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000 

Chain KUBE-SERVICES (2 references) 
target  prot opt source    destination   
REJECT  tcp -- 0.0.0.0/0   10.96.252.131  /* default/redis-cache-service:redis has no endpoints */ tcp dpt:6379 reject-with icmp-port-unreachable 
REJECT  tcp -- 0.0.0.0/0   10.96.252.131  /* default/redis-cache-service:cluster has no endpoints */ tcp dpt:16379 reject-with icmp-port-unreachable 
REJECT  tcp -- 0.0.0.0/0   10.105.180.126  /* default/redis-pubsub-service:redis has no endpoints */ tcp dpt:6379 reject-with icmp-port-unreachable 
REJECT  tcp -- 0.0.0.0/0   10.105.180.126  /* default/redis-pubsub-service:cluster has no endpoints */ tcp dpt:16379 reject-with icmp-port-unreachable 

Chain WEAVE-NPC (1 references) 
target  prot opt source    destination   
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED 
ACCEPT  all -- 0.0.0.0/0   224.0.0.0/4   
WEAVE-NPC-DEFAULT all -- 0.0.0.0/0   0.0.0.0/0   state NEW 
WEAVE-NPC-INGRESS all -- 0.0.0.0/0   0.0.0.0/0   state NEW 
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   ! match-set weave-local-pods dst 

Chain WEAVE-NPC-DEFAULT (1 references) 
target  prot opt source    destination   
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst 
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   match-set weave-iuZcey(5DeXbzgRFs8Szo][email protected] dst 
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   match-set weave-4vtqMI+kx/2]jD%_c0S%thO%V dst 

Chain WEAVE-NPC-INGRESS (1 references) 
target  prot opt source    destination   

Chain cali-FORWARD (1 references) 
target  prot opt source    destination   
cali-from-wl-dispatch all -- 0.0.0.0/0   0.0.0.0/0   /* cali:X3vB2lGcBrfkYquC */ 
cali-to-wl-dispatch all -- 0.0.0.0/0   0.0.0.0/0   /* cali:UtJ9FnhBnFbyQMvU */ 
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:Tt19HcSdA5YIGSsw */ 
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:9LzfFCvnpC5_MYXm */ 
MARK  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:7AofLLOqCM5j36rM */ MARK and 0xf1ffffff 
cali-from-host-endpoint all -- 0.0.0.0/0   0.0.0.0/0   /* cali:QM1_joSl7tL76Az7 */ mark match 0x0/0x1000000 
cali-to-host-endpoint all -- 0.0.0.0/0   0.0.0.0/0   /* cali:C1QSog3bk0AykjAO */ 
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:DmFiPAmzcisqZcvo */ /* Host endpoint policy accepted packet. */ mark match 0x1000000/0x1000000 

Chain cali-INPUT (1 references) 
target  prot opt source    destination   
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:i7okJZpS8VxaJB3n */ mark match 0x1000000/0x1000000 
DROP  4 -- 0.0.0.0/0   0.0.0.0/0   /* cali:p8Wwvr6qydjU36AQ */ /* Drop IPIP packets from non-Calico hosts */ ! match-set cali4-all-hosts src 
cali-wl-to-host all -- 0.0.0.0/0   0.0.0.0/0   [goto] /* cali:QZT4Ptg57_76nGng */ 
MARK  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:V0Veitpvpl5h1xwi */ MARK and 0xf0ffffff 
cali-from-host-endpoint all -- 0.0.0.0/0   0.0.0.0/0   /* cali:3R1g0cpvSoBlKzVr */ 
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:efXx-pqD4s60WsDL */ /* Host endpoint policy accepted packet. */ mark match 0x1000000/0x1000000 

Chain cali-OUTPUT (1 references) 
target  prot opt source    destination   
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:YQSSJIsRcHjFbXaI */ mark match 0x1000000/0x1000000 
RETURN  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:KRjBsKsBcFBYKCEw */ 
MARK  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:3VKAQBcyUUW5kS_j */ MARK and 0xf0ffffff 
cali-to-host-endpoint all -- 0.0.0.0/0   0.0.0.0/0   /* cali:Z1mBCSH1XHM6qq0k */ 
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:N0jyWt2RfBedKw3L */ /* Host endpoint policy accepted packet. */ mark match 0x1000000/0x1000000 

Chain cali-failsafe-in (0 references) 
target  prot opt source    destination   
ACCEPT  tcp -- 0.0.0.0/0   0.0.0.0/0   /* cali:wWFQM43tJU7wwnFZ */ multiport dports 22 
ACCEPT  udp -- 0.0.0.0/0   0.0.0.0/0   /* cali:LwNV--R8MjeUYacw */ multiport dports 68 

Chain cali-failsafe-out (0 references) 
target  prot opt source    destination   
ACCEPT  tcp -- 0.0.0.0/0   0.0.0.0/0   /* cali:73bZKoyDfOpFwC2T */ multiport dports 2379 
ACCEPT  tcp -- 0.0.0.0/0   0.0.0.0/0   /* cali:QMFuWo6o-d9yOpNm */ multiport dports 2380 
ACCEPT  tcp -- 0.0.0.0/0   0.0.0.0/0   /* cali:Kup7QkrsdmfGX0uL */ multiport dports 4001 
ACCEPT  tcp -- 0.0.0.0/0   0.0.0.0/0   /* cali:xYYr5PEqDf_Pqfkv */ multiport dports 7001 
ACCEPT  udp -- 0.0.0.0/0   0.0.0.0/0   /* cali:nbWBvu4OtudVY60Q */ multiport dports 53 
ACCEPT  udp -- 0.0.0.0/0   0.0.0.0/0   /* cali:UxFu5cDK5En6dT3Y */ multiport dports 67 

Chain cali-from-host-endpoint (2 references) 
target  prot opt source    destination   

Chain cali-from-wl-dispatch (2 references) 
target  prot opt source    destination   
DROP  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:zTj6P0TIgYvgz-md */ /* Unknown interface */ 

Chain cali-to-host-endpoint (2 references) 
target  prot opt source    destination   

Chain cali-to-wl-dispatch (1 references) 
target  prot opt source    destination   
DROP  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:7KNphB1nNHw80nIO */ /* Unknown interface */ 

Chain cali-wl-to-host (1 references) 
target  prot opt source    destination   
ACCEPT  udp -- 0.0.0.0/0   0.0.0.0/0   /* cali:aEOMPPLgak2S0Lxs */ multiport sports 68 multiport dports 67 
ACCEPT  udp -- 0.0.0.0/0   0.0.0.0/0   /* cali:SzR8ejPiuXtFMS8B */ multiport dports 53 
cali-from-wl-dispatch all -- 0.0.0.0/0   0.0.0.0/0   /* cali:MEmlbCdco0Fefcrw */ 
ACCEPT  all -- 0.0.0.0/0   0.0.0.0/0   /* cali:LZBoXHDOlr3ok4R3 */ /* Configured DefaultEndpointToHostAction */ 

Shut down all containers, stop kubernetes and the docker service. Then run 'sudo iptables -n -L' and add the output to your question –


@TarunLalwani updated the iptables output – anandaravindan


Have you installed your CNI? –

Answer


Maybe your iptables still has stale entries, because I think you ran kubeadm reset or set up an overlay network before. Please do the following after kubeadm reset and after removing docker.

Check your interfaces with ip link and clean up the old entries:

ip link delete cni0 
ip link delete flannel.1 
(if other network interfaces exist, for example from Weave, please delete them too)

then clean up the iptables:

iptables -P INPUT ACCEPT 
iptables -P FORWARD ACCEPT 
iptables -P OUTPUT ACCEPT 
iptables -t nat -F 
iptables -t mangle -F 
iptables -F 
iptables -X
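The cleanup steps above can be sketched as a single script. This is destructive (it flushes ALL firewall rules), must run as root, and is only meant to be run after kubeadm reset, so it is shown here as a function rather than executed directly:

```shell
# Sketch of the cleanup described in this answer. Run as root, and only
# after 'kubeadm reset' -- this flushes every iptables rule on the host.
reset_network_state() {
  ip link delete cni0 2>/dev/null       # remove leftover CNI bridge, if present
  ip link delete flannel.1 2>/dev/null  # remove leftover flannel vxlan device
  iptables -P INPUT ACCEPT              # restore permissive default policies
  iptables -P FORWARD ACCEPT
  iptables -P OUTPUT ACCEPT
  iptables -t nat -F                    # flush the nat, mangle and filter tables
  iptables -t mangle -F
  iptables -F
  iptables -X                           # delete user-defined chains (cali-*, KUBE-*, WEAVE-*)
}
```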

Then reinstall docker and kubernetes; the pod should then be able to reach the external network.
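After reinstalling, a quick smoke test can confirm both in-cluster DNS and external connectivity. This is only a sketch (the pod name and image are arbitrary, and kubectl wait assumes a reasonably recent kubectl), defined as a function so it can be run when the cluster is back up:

```shell
# Hypothetical post-reinstall check: in-cluster DNS plus external reachability.
verify_pod_network() {
  kubectl run busybox --image=busybox:1.28 --restart=Never -- sleep 3600
  kubectl wait --for=condition=Ready pod/busybox --timeout=120s
  kubectl exec busybox -- nslookup kubernetes.default            # in-cluster DNS
  kubectl exec busybox -- wget -q -O- http://example.com >/dev/null  # external network
  kubectl delete pod busybox                                     # clean up the test pod
}
```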

Good luck!