Does kubeadm init require the kubelet to be running first?

Source: 1-3 [Core Fundamentals] Building a Basic K8s Cluster

guaguaerhao

2022-01-24 09:23:19

kubeadm init fails


I checked the kubelet's status with `journalctl -u kubelet`:

[root@VM-16-14-centos ~]# journalctl -u kubelet

-- Logs begin at Sun 2022-01-23 14:47:41 CST, end at Mon 2022-01-24 08:53:51 CST. --

Jan 24 01:37:45 VM-16-14-centos systemd[1]: Started kubelet: The Kubernetes Node Agent.

Jan 24 01:37:45 VM-16-14-centos kubelet[27333]: E0124 01:37:45.711901   27333 server.go:205] "Failed to load kubelet config file" err="failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kube

Jan 24 01:37:45 VM-16-14-centos systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE

Jan 24 01:37:45 VM-16-14-centos systemd[1]: Unit kubelet.service entered failed state.

Jan 24 01:37:45 VM-16-14-centos systemd[1]: kubelet.service failed.

Jan 24 01:37:55 VM-16-14-centos systemd[1]: kubelet.service holdoff time over, scheduling restart.

Jan 24 01:37:55 VM-16-14-centos systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
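
The "Failed to load kubelet config file" error above is actually expected before `kubeadm init` has run: kubeadm only writes `/var/lib/kubelet/config.yaml` during its `[kubelet-start]` phase, so the kubelet crash-looping beforehand is normal. A minimal check sketch (assuming a standard kubeadm layout):

```shell
# kubeadm writes /var/lib/kubelet/config.yaml during its [kubelet-start]
# phase; before that, the kubelet has nothing to load and keeps restarting.
if [ -f /var/lib/kubelet/config.yaml ]; then
  echo "config.yaml present: kubeadm has already written the kubelet config"
else
  echo "config.yaml missing: kubeadm init has not reached kubelet-start yet"
fi
```

Either way, the crash loop shown by `journalctl` at this stage does not by itself explain why init failed later.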



Log from `kubeadm init --apiserver-advertise-address=111.229.11.21 --pod-network-cidr=10.244.0.0/16`:

[root@VM-16-14-centos ~]# kubeadm init --apiserver-advertise-address=111.229.11.21 --pod-network-cidr=10.244.0.0/16

[init] Using Kubernetes version: v1.23.2

[preflight] Running pre-flight checks

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vm-16-14-centos] and IPs [10.96.0.1 111.229.11.215]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "etcd/ca" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [localhost vm-16-14-centos] and IPs [111.229.11.215 127.0.0.1 ::1]

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [localhost vm-16-14-centos] and IPs [111.229.11.215 127.0.0.1 ::1]

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

[control-plane] Creating static Pod manifest for "kube-scheduler"

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[kubelet-check] Initial timeout of 40s passed.

[kubelet-check] It seems like the kubelet isn't running or healthy.

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

[kubelet-check] It seems like the kubelet isn't running or healthy.

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

[kubelet-check] It seems like the kubelet isn't running or healthy.

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

[kubelet-check] It seems like the kubelet isn't running or healthy.

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

[kubelet-check] It seems like the kubelet isn't running or healthy.

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.


        Unfortunately, an error has occurred:

                timed out waiting for the condition


        This error is likely caused by:

                - The kubelet is not running

                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)


        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

                - 'systemctl status kubelet'

                - 'journalctl -xeu kubelet'


        Additionally, a control plane component may have crashed or exited when started by the container runtime.

        To troubleshoot, list all containers using your preferred container runtimes CLI.


        Here is one example how you may list all Kubernetes containers running in docker:

                - 'docker ps -a | grep kube | grep -v pause'

                Once you have found the failing container, you can inspect its logs with:

                - 'docker logs CONTAINERID'


error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

To see the stack trace of this error execute with --v=5 or higher

[root@VM-16-14-centos ~]# 
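
The kubeadm output above already lists the useful next steps; collected into one pass (assuming a systemd host with Docker as the container runtime, as in this setup):

```shell
# One-pass version of the troubleshooting steps kubeadm suggests.
# `|| true` keeps the script going even if a tool is missing on this host.
systemctl status kubelet --no-pager || true              # is the service running?
journalctl -xeu kubelet --no-pager | tail -n 30 || true  # most recent kubelet errors
docker ps -a | grep kube | grep -v pause || true         # crashed control-plane containers
```

The `journalctl` tail is usually the most telling: it shows the actual error the kubelet hit after kubeadm wrote its config and started it.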


1 Answer

张飞扬

2022-03-24

Kubelet problems like this are usually related to the Docker configuration or to version compatibility between Docker and K8s. You could try reinstalling a newer version of Docker and then running kubeadm init again.
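
One specific Docker configuration issue worth checking (an assumption here, not something confirmed by the logs above) is a cgroup driver mismatch: the kubelet defaults to the `systemd` driver while Docker often defaults to `cgroupfs`, which keeps the kubelet from staying up. A non-destructive sketch:

```shell
# Inspect Docker's current cgroup driver (prints nothing useful if Docker
# is not installed on this host).
docker info 2>/dev/null | grep -i 'cgroup driver' || echo "docker not available here"

# If it reports "cgroupfs", this daemon.json aligns Docker with the kubelet.
# Write it to /etc/docker/daemon.json, restart Docker, then run
# `kubeadm reset -f` and re-run `kubeadm init`.
cat <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```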
