canal ready 0

Source: 1-3 [Core Fundamentals] Building a Basic K8S Cluster

uareRight

2021-01-16 18:52:15

http://img.mukewang.com/climg/6002c54a095e373107900068.jpg

The canal pod's READY count stays at 0 and never becomes 1.


5 answers

uareRight (asker)

2021-01-17

The problem has been solved.


uareRight (asker)

2021-01-16

[root@master ~]# kubectl describe pod canal-tkbr7 --namespace=kube-system

Name:         canal-tkbr7

Namespace:    kube-system

Priority:     0

Node:         master/192.168.20.172

Start Time:   Sat, 16 Jan 2021 18:59:28 +0800

Labels:       controller-revision-hash=9ccd97b67

              k8s-app=canal

              pod-template-generation=1

Annotations:  scheduler.alpha.kubernetes.io/critical-pod: 

Status:       Running

IP:           192.168.20.172

IPs:

  IP:           192.168.20.172

Controlled By:  DaemonSet/canal

Containers:

  calico-node:

    Container ID:   docker://8fdcb162370beb4a1fb71fb184a7f8db7ebc58f75633b56dbe7159b25a139bbe

    Image:          quay.io/calico/node:v3.1.7

    Image ID:       docker-pullable://quay.io/calico/node@sha256:07c9871851f07ab8b777d574d0b4396a03cd496be13c6d53b64bf517e2673362

    Port:           <none>

    Host Port:      <none>

    State:          Waiting

      Reason:       CrashLoopBackOff

    Last State:     Terminated

      Reason:       Error

      Exit Code:    1

      Started:      Sat, 16 Jan 2021 19:15:52 +0800

      Finished:     Sat, 16 Jan 2021 19:15:52 +0800

    Ready:          False

    Restart Count:  10

    Requests:

      cpu:      250m

    Liveness:   http-get http://:9099/liveness delay=10s timeout=1s period=10s #success=1 #failure=6

    Readiness:  http-get http://:9099/readiness delay=0s timeout=1s period=10s #success=1 #failure=3

    Environment:

      DATASTORE_TYPE:                     kubernetes

      FELIX_LOGSEVERITYSCREEN:            info

      CALICO_NETWORKING_BACKEND:          none

      CLUSTER_TYPE:                       k8s,canal

      CALICO_DISABLE_FILE_LOGGING:        true

      FELIX_IPTABLESREFRESHINTERVAL:      60

      FELIX_IPV6SUPPORT:                  false

      WAIT_FOR_DATASTORE:                 true

      IP:                                 

      NODENAME:                            (v1:spec.nodeName)

      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT

      FELIX_HEALTHENABLED:                true

    Mounts:

      /lib/modules from lib-modules (ro)

      /var/lib/calico from var-lib-calico (rw)

      /var/run/calico from var-run-calico (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from canal-token-5gshb (ro)

  install-cni:

    Container ID:  docker://2584325df4afe053757f84157a6e388d94b92fa53177f8f00347ff62a1ae1dbf

    Image:         quay.io/calico/cni:v3.1.7

    Image ID:      docker-pullable://quay.io/calico/cni@sha256:a691efc4890aee8c41f56a6e86c638dfa5692c4207968ac0a22c43c932927dbf

    Port:          <none>

    Host Port:     <none>

    Command:

      /install-cni.sh

    State:          Running

      Started:      Sat, 16 Jan 2021 19:14:25 +0800

    Last State:     Terminated

      Reason:       Error

      Exit Code:    255

      Started:      Sat, 16 Jan 2021 19:05:29 +0800

      Finished:     Sat, 16 Jan 2021 19:11:34 +0800

    Ready:          True

    Restart Count:  2

    Environment:

      CNI_CONF_NAME:         10-calico.conflist

      CNI_NETWORK_CONFIG:    <set to the key 'cni_network_config' of config map 'canal-config'>  Optional: false

      KUBERNETES_NODE_NAME:   (v1:spec.nodeName)

    Mounts:

      /host/etc/cni/net.d from cni-net-dir (rw)

      /host/opt/cni/bin from cni-bin-dir (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from canal-token-5gshb (ro)

  kube-flannel:

    Container ID:  docker://88a427c8ae882d6b85b92a5bf467b9fd6a98bd40045ec18534261528d590fbd8

    Image:         quay.io/coreos/flannel:v0.9.1

    Image ID:      docker-pullable://quay.io/coreos/flannel@sha256:60d77552f4ebb6ed4f0562876c6e2e0b0e0ab873cb01808f23f55c8adabd1f59

    Port:          <none>

    Host Port:     <none>

    Command:

      /opt/bin/flanneld

      --ip-masq

      --kube-subnet-mgr

    State:          Waiting

      Reason:       CrashLoopBackOff

    Last State:     Terminated

      Reason:       Error

      Exit Code:    1

      Started:      Sat, 16 Jan 2021 19:15:52 +0800

      Finished:     Sat, 16 Jan 2021 19:15:52 +0800

    Ready:          False

    Restart Count:  10

    Environment:

      POD_NAME:          canal-tkbr7 (v1:metadata.name)

      POD_NAMESPACE:     kube-system (v1:metadata.namespace)

      FLANNELD_IFACE:    <set to the key 'canal_iface' of config map 'canal-config'>  Optional: false

      FLANNELD_IP_MASQ:  <set to the key 'masquerade' of config map 'canal-config'>   Optional: false

    Mounts:

      /etc/kube-flannel/ from flannel-cfg (rw)

      /run from run (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from canal-token-5gshb (ro)

Conditions:

  Type              Status

  Initialized       True 

  Ready             False 

  ContainersReady   False 

  PodScheduled      True 

Volumes:

  lib-modules:

    Type:          HostPath (bare host directory volume)

    Path:          /lib/modules

    HostPathType:  

  var-run-calico:

    Type:          HostPath (bare host directory volume)

    Path:          /var/run/calico

    HostPathType:  

  var-lib-calico:

    Type:          HostPath (bare host directory volume)

    Path:          /var/lib/calico

    HostPathType:  

  cni-bin-dir:

    Type:          HostPath (bare host directory volume)

    Path:          /opt/cni/bin

    HostPathType:  

  cni-net-dir:

    Type:          HostPath (bare host directory volume)

    Path:          /etc/cni/net.d

    HostPathType:  

  run:

    Type:          HostPath (bare host directory volume)

    Path:          /run

    HostPathType:  

  flannel-cfg:

    Type:      ConfigMap (a volume populated by a ConfigMap)

    Name:      canal-config

    Optional:  false

  canal-token-5gshb:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  canal-token-5gshb

    Optional:    false

QoS Class:       Burstable

Node-Selectors:  <none>

Tolerations:     :NoSchedule op=Exists

                 :NoExecute op=Exists

                 CriticalAddonsOnly op=Exists

                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists

                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists

                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists

                 node.kubernetes.io/not-ready:NoExecute op=Exists

                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists

                 node.kubernetes.io/unreachable:NoExecute op=Exists

                 node.kubernetes.io/unschedulable:NoSchedule op=Exists

Events:

  Type     Reason          Age                    From               Message

  ----     ------          ----                   ----               -------

  Normal   Scheduled       17m                    default-scheduler  Successfully assigned kube-system/canal-tkbr7 to master

  Normal   Pulled          17m                    kubelet            Container image "quay.io/calico/cni:v3.1.7" already present on machine

  Normal   Created         17m                    kubelet            Created container install-cni

  Normal   Started         17m                    kubelet            Started container install-cni

  Normal   Created         17m (x2 over 17m)      kubelet            Created container kube-flannel

  Normal   Started         17m (x2 over 17m)      kubelet            Started container kube-flannel

  Warning  BackOff         17m (x3 over 17m)      kubelet            Back-off restarting failed container

  Warning  BackOff         17m (x3 over 17m)      kubelet            Back-off restarting failed container

  Normal   Pulled          17m (x3 over 17m)      kubelet            Container image "quay.io/calico/node:v3.1.7" already present on machine

  Normal   Pulled          17m (x3 over 17m)      kubelet            Container image "quay.io/coreos/flannel:v0.9.1" already present on machine

  Normal   Started         17m (x3 over 17m)      kubelet            Started container calico-node

  Normal   Created         17m (x3 over 17m)      kubelet            Created container calico-node

  Normal   SandboxChanged  11m                    kubelet            Pod sandbox changed, it will be killed and re-created.

  Warning  Failed          11m (x2 over 11m)      kubelet            Error: services have not yet been read at least once, cannot construct envvars

  Warning  Failed          11m (x2 over 11m)      kubelet            Error: services have not yet been read at least once, cannot construct envvars

  Warning  Failed          11m (x2 over 11m)      kubelet            Error: services have not yet been read at least once, cannot construct envvars

  Normal   Started         11m                    kubelet            Started container kube-flannel

  Normal   Pulled          11m (x3 over 11m)      kubelet            Container image "quay.io/coreos/flannel:v0.9.1" already present on machine

  Normal   Pulled          11m (x3 over 11m)      kubelet            Container image "quay.io/calico/cni:v3.1.7" already present on machine

  Normal   Pulled          11m (x3 over 11m)      kubelet            Container image "quay.io/calico/node:v3.1.7" already present on machine

  Normal   Started         11m                    kubelet            Started container calico-node

  Normal   Created         11m                    kubelet            Created container install-cni

  Normal   Started         11m                    kubelet            Started container install-cni

  Normal   Created         11m                    kubelet            Created container kube-flannel

  Normal   Created         11m                    kubelet            Created container calico-node

  Warning  Unhealthy       11m                    kubelet            Liveness probe failed: Get "http://192.168.20.172:9099/liveness": dial tcp 192.168.20.172:9099: connect: connection refused

  Warning  Unhealthy       11m (x2 over 11m)      kubelet            Readiness probe failed: Get "http://192.168.20.172:9099/readiness": dial tcp 192.168.20.172:9099: connect: connection refused

  Normal   SandboxChanged  2m50s                  kubelet            Pod sandbox changed, it will be killed and re-created.

  Warning  Failed          2m45s (x3 over 2m49s)  kubelet            Error: services have not yet been read at least once, cannot construct envvars

  Warning  Failed          2m45s (x3 over 2m49s)  kubelet            Error: services have not yet been read at least once, cannot construct envvars

  Normal   Pulled          2m45s (x3 over 2m49s)  kubelet            Container image "quay.io/coreos/flannel:v0.9.1" already present on machine

  Warning  Failed          2m45s (x3 over 2m49s)  kubelet            Error: services have not yet been read at least once, cannot construct envvars

  Normal   Pulled          2m32s (x4 over 2m49s)  kubelet            Container image "quay.io/calico/cni:v3.1.7" already present on machine

  Normal   Pulled          2m32s (x4 over 2m49s)  kubelet            Container image "quay.io/calico/node:v3.1.7" already present on machine

  Normal   Created         2m32s                  kubelet            Created container calico-node

  Normal   Started         2m32s                  kubelet            Started container calico-node

  Normal   Created         2m32s                  kubelet            Created container install-cni

  Normal   Started         2m32s                  kubelet            Started container install-cni
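The Events above show two distinct failures: a kubelet-side error ("services have not yet been read at least once, cannot construct envvars") and liveness/readiness probes on port 9099 being refused because calico-node keeps crashing. A reasonable next diagnostic step (a sketch only; the pod and container names are taken from the output above, and the restart is an assumption to verify against the logs first) is to read the containers' own logs and the kubelet journal:

```shell
# The describe output only shows events; pull the crashing containers' logs,
# including the previous (terminated) instance.
kubectl logs canal-tkbr7 -n kube-system -c calico-node --previous
kubectl logs canal-tkbr7 -n kube-system -c kube-flannel --previous

# The "services have not yet been read at least once" message is emitted by the
# kubelet itself, so check its journal on the node as well.
journalctl -u kubelet --since "30 min ago" | grep -iE "canal|envvars"

# If the kubelet's service cache is stale, restarting the kubelet on the node
# often clears this error (verify against the journal before restarting).
systemctl restart kubelet
```

If the containers restart cleanly after that, `kubectl get pods -n kube-system` should show the canal pod's READY column move to 3/3.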




uareRight (asker)

2021-01-16

http://img.mukewang.com/climg/6002c7f30990b00512320767.jpg

http://img.mukewang.com/climg/6002c80809bfd97b12060772.jpg

http://img.mukewang.com/climg/6002c8ef095fea1816730644.jpg

http://img.mukewang.com/climg/6002c8dd0905308c11150821.jpg

http://img.mukewang.com/climg/6002c8ce0940055a12570823.jpg


uareRight (asker)

2021-01-16

http://img.mukewang.com/climg/6002c7cb09a5f80112960578.jpg


uareRight (asker)

2021-01-16

http://img.mukewang.com/climg/6002c7aa091dd75111930566.jpg

