
kubeadm join fails: http://localhost:10248/healthz connection refused

StefanSchubert · asked 6 years ago

    I am trying to set up Kubernetes on three virtual machines (following a CentOS 7 tutorial), but unfortunately the join of the workers fails. I hope someone has already run into this problem (I found it twice on the web without an answer) or has a guess at what is going wrong.

    Here is what kubeadm join gives me:

    [preflight] running pre-flight checks
            [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
    you can solve this problem with following methods:
     1. Run 'modprobe -- ' to load missing kernel modules;
    2. Provide the missing builtin kernel ipvs support
    
    I0902 20:31:15.401693    2032 kernel_validator.go:81] Validating kernel version
    I0902 20:31:15.401768    2032 kernel_validator.go:96] Validating kernel config
            [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
    [discovery] Trying to connect to API Server "192.168.1.30:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.30:6443"
    [discovery] Requesting info from "https://192.168.1.30:6443" again to validate TLS against the pinned public key
    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.30:6443"
    [discovery] Successfully established connection with API Server "192.168.1.30:6443"
    [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [preflight] Activating the kubelet service
    [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
    [kubelet-check] It seems like the kubelet isn't running or healthy.
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
    [kubelet-check] It seems like the kubelet isn't running or healthy.
    

    even though the kubelet is running:

    [root@k8s-worker1 nodesetup]# systemctl status kubelet -l
    ● kubelet.service - kubelet: The Kubernetes Node Agent
       Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
      Drop-In: /etc/systemd/system/kubelet.service.d
               └─10-kubeadm.conf
       Active: active (running) since So 2018-09-02 20:31:15 CEST; 19min ago
         Docs: https://kubernetes.io/docs/
     Main PID: 2093 (kubelet)
        Tasks: 7
       Memory: 12.1M
       CGroup: /system.slice/kubelet.service
               └─2093 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
    
    Sep 02 20:31:15 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
    Sep 02 20:31:15 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
    Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
    Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
    Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440010    2093 server.go:408] Version: v1.11.2
    Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440314    2093 plugins.go:97] No cloud provider specified.
    [root@k8s-worker1 nodesetup]# 
    

    As far as I can tell, the worker can reach the master, but the join then runs a health check against a local endpoint (the kubelet's healthz port 10248) that is not up yet. Any ideas?
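    For reference, the health check that kubeadm performs can be reproduced by hand, and together with the kubelet logs it usually reveals the real error; a minimal diagnostic sketch (assuming the default healthz port 10248):

    # Reproduce the health check that kubeadm runs against the kubelet
    curl -sS http://localhost:10248/healthz ; echo
    # Check whether the kubelet is up and what it logged while bootstrapping
    systemctl status kubelet -l
    journalctl -xeu kubelet --no-pager | tail -n 50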

    Here is what I did to set up my worker:

    exec bash
    setenforce 0
    sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
    
    
    echo "Setting Firewallrules"
    firewall-cmd --permanent --add-port=10250/tcp
    firewall-cmd --permanent --add-port=10255/tcp
    firewall-cmd --permanent --add-port=30000-32767/tcp
    firewall-cmd --permanent --add-port=6783/tcp
    firewall-cmd --reload
    
    
    echo "And enable br filtering"
    modprobe br_netfilter
    echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
    
    
    echo "disable swap"
    swapoff -a
    echo "### You need to edit /etc/fstab and comment the swapline!! ###"
    
    
    echo "Adding kubernetes repo for download"
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
            https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
    
    
    echo "install the Docker-ce dependencies"
    yum install -y yum-utils device-mapper-persistent-data lvm2
    
    echo "add docker-ce repository"
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    
    echo "install docker ce"
    yum install -y docker-ce
    
    echo "Install kubeadm kubelet kubectl"
    yum install kubelet kubeadm kubectl -y
    
    echo "start and enable kubectl"
    systemctl restart docker && systemctl enable docker
    systemctl restart kubelet && systemctl enable kubelet
    
    echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)"
    
    echo "We assume that docker is using cgroupfs ... assuming kubelet does so too"
    docker info | grep -i cgroup
    grep -i cgroup /var/lib/kubelet/kubeadm-flags.env
    #  old style
    # sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    
    systemctl daemon-reload
    systemctl restart kubelet
    
    # There has been an issue reported that traffic in iptables is being routed incorrectly.
    # The settings below make sure iptables is configured correctly.
    #
    sudo bash -c 'cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF'
    
    # Make changes effective
    sudo sysctl --system
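    Since the script above only warns about /etc/fstab, here is a hedged one-liner for commenting out the swap entry (a sketch - it assumes every uncommented line containing "swap" is a swap mount, so verify /etc/fstab afterwards):

    # Comment out active swap entries so swap stays off after a reboot
    sed -i '/\bswap\b/ s/^[^#]/#&/' /etc/fstab
    # Double-check the result
    grep swap /etc/fstab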
    

    Update I

    journalctl output on the worker:

    [root@k8s-worker1 ~]# journalctl -xeu kubelet
    Sep 02 21:19:56 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
    -- Subject: Unit kubelet.service has finished start-up
    -- Defined-By: systemd
    -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
    -- 
    -- Unit kubelet.service has finished starting up.
    -- 
    -- The start-up result is done.
    Sep 02 21:19:56 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
    -- Subject: Unit kubelet.service has begun start-up
    -- Defined-By: systemd
    -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
    -- 
    -- Unit kubelet.service has begun starting up.
    Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
    Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
    Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788059    3082 server.go:408] Version: v1.11.2
    Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788214    3082 plugins.go:97] No cloud provider specified.
    Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469    3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized
    Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
    Sep 02 21:19:56 k8s-worker1 systemd[1]: Unit kubelet.service entered failed state.
    Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service failed.
    

    kubectl get pods on the master gives:

    [root@k8s-master ~]# kubectl get pods --all-namespaces=true
    NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
    kube-system   coredns-78fcdf6894-79n2m             0/1       Pending   0          1d
    kube-system   coredns-78fcdf6894-tlngr             0/1       Pending   0          1d
    kube-system   etcd-k8s-master                      1/1       Running   3          1d
    kube-system   kube-apiserver-k8s-master            1/1       Running   0          1d
    kube-system   kube-controller-manager-k8s-master   0/1       Evicted   0          1d
    kube-system   kube-proxy-2x8cx                     1/1       Running   3          1d
    kube-system   kube-scheduler-k8s-master            1/1       Running   0          1d
    [root@k8s-master ~]# 
    

    Update II: As a next step I generated a new token on the master and used it in the join command. Although the token list on the master showed it as valid, the worker node insisted that the master did not know the token or that it had expired... Stop! Time to start over from scratch, beginning with the master setup.
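    For the record, a fresh token can also be issued on the master without rebuilding anything; kubeadm can even print a complete join command (a sketch, to be run as root on the master):

    # List the existing bootstrap tokens and their expiry times
    kubeadm token list
    # Create a new token and print a ready-to-use join command for the workers
    kubeadm token create --print-join-command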

    Here is what I did:

    2) After installing the fresh image, the primary interface enp0s3 was not configured to come up at boot (so: ifup enp0s3, and reconfigured it in /etc/sysconfig/network-scripts to start at boot; a sketch follows below).
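    A hedged sketch of that change, assuming the interface file is /etc/sysconfig/network-scripts/ifcfg-enp0s3:

    # Bring enp0s3 up at boot and activate it now
    sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-enp0s3
    ifup enp0s3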

    3) Configured the second interface for the internal Kubernetes network:

    /etc/hosts:

    #!/bin/sh
    echo '192.168.1.30 k8s-master' >> /etc/hosts
    echo '192.168.1.40 k8s-worker1' >> /etc/hosts
    echo '192.168.1.50 k8s-worker2' >> /etc/hosts
    

    I identified my second interface via "ip -color -human addr"; in my case it was enp0s8, so:

    #!/bin/sh
    echo "Setting up internal Interface"
    cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-enp0s8
    DEVICE=enp0s8
    IPADDR=192.168.1.30
    NETMASK=255.255.255.0
    NETWORK=192.168.1.0
    BROADCAST=192.168.1.255
    ONBOOT=yes
    NAME=enp0s8
    EOF
    
    echo "Activate interface"
    ifup enp0s8
    

    4) Hostname, swap, disabling SELinux

    #!/bin/sh
    echo "Setting hostname und deactivate SELinux"
    hostnamectl set-hostname 'k8s-master'
    exec bash
    setenforce 0
    sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
    
    echo "disable swap"
    swapoff -a
    
    echo "### You need to edit /etc/fstab and comment the swapline!! ###"
    

    5) Set up the firewall rules

    #!/bin/sh
    echo "Setting Firewallrules"
    firewall-cmd --permanent --add-port=6443/tcp
    firewall-cmd --permanent --add-port=2379-2380/tcp
    firewall-cmd --permanent --add-port=10250/tcp
    firewall-cmd --permanent --add-port=10251/tcp
    firewall-cmd --permanent --add-port=10252/tcp
    firewall-cmd --permanent --add-port=10255/tcp
    firewall-cmd --reload
    
    echo "And enable br filtering"
    modprobe br_netfilter
    echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
    

    6) Added the Kubernetes repository:

    #!/bin/sh
    echo "Adding kubernetes repo for download"
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
            https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
    

    7) Installed the required packages and configured the services

    #!/bin/sh
    
    echo "install the Docker-ce dependencies"
    yum install -y yum-utils device-mapper-persistent-data lvm2
    
    echo "add docker-ce repository"
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    
    echo "install docker ce"
    yum install -y docker-ce
    
    echo "Install kubeadm kubelet kubectl"
    yum install kubelet kubeadm kubectl -y
    
    echo "start and enable kubectl"
    systemctl restart docker && systemctl enable docker
    systemctl restart kubelet && systemctl enable kubelet
    
    echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)"
    echo "We assume that docker is using cgroupfs ... assuming kubelet does so too"
    docker info | grep -i cgroup
    grep -i cgroup /var/lib/kubelet/kubeadm-flags.env
    #  old style
    # sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    
    systemctl daemon-reload
    systemctl restart kubelet
    
    # There has been an issue reported that traffic in iptables is being routed incorrectly.
    # The settings below make sure iptables is configured correctly.
    #
    sudo bash -c 'cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF'
    
    # Make changes effective
    sudo sysctl --system
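    If the two grep commands above ever show different cgroup drivers, a common way to align them is to pin Docker's driver explicitly (a sketch, not part of the tutorial - the value must match what /var/lib/kubelet/kubeadm-flags.env reports):

    # Pin Docker to the cgroupfs driver so it matches the kubelet configuration above
    mkdir -p /etc/docker
    echo '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' > /etc/docker/daemon.json
    systemctl daemon-reload
    systemctl restart docker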
    

    8) Initialized the cluster

    #!/bin/sh
    echo "Init kubernetes. Check join cmd in initProtocol.txt"
    kubeadm init --apiserver-advertise-address=192.168.1.30 --pod-network-cidr=192.168.1.0/16 | tee initProtocol.txt
    

    Init kubernetes. Check join cmd in initProtocol.txt
    [init] using Kubernetes version: v1.11.2
    [preflight] running pre-flight checks
            [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
    I0904 21:53:15.271999    1526 kernel_validator.go:81] Validating kernel version
    I0904 21:53:15.272165    1526 kernel_validator.go:96] Validating kernel config
            [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
    [preflight/images] Pulling images required for setting up a Kubernetes cluster
    [preflight/images] This might take a minute or two, depending on the speed of your internet connection
    [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [preflight] Activating the kubelet service
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.30]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated sa key and public key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1]
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.30 127.0.0.1 ::1]
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [certificates] Generated apiserver-etcd-client certificate and key.
    [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
    [init] this might take a minute or longer if the control plane images have to be pulled
    [apiclient] All control plane components are healthy after 43.504792 seconds
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
    [markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
    [markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
    [bootstraptoken] using token: n4yt3r.3c8tuj11nwszts2d
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 192.168.1.30:6443 --token n4yt3r.3c8tuj11nwszts2d --discovery-token-ca-cert-hash sha256:466e7972a4b6997651ac1197fdde68d325a7bc41f2fccc2b1efc17515af61172
    

    Remark: up to this point it looks fine to me, though I am a bit worried that the latest docker-ce version might cause trouble here...
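    In case the Docker version warning does matter, yum can list and install an older, validated docker-ce build instead (a sketch; <VERSION> is a placeholder to be replaced with one of the strings shown by the first command):

    # Show all docker-ce versions available in the configured repository
    yum list docker-ce --showduplicates | sort -r
    # Install a specific older version instead of the latest one
    yum install -y docker-ce-<VERSION>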

    9) Deployed the pod network

    #!/bin/bash
    
    echo "Configure demo cluster usage as root"
    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config
    
    # Deploy the network using flannel
    # Taken from the first two matching tutorials on the web
    # kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    # kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
    
    # taken from https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml
    
    echo "Try to run kubectl get pods --all-namespaces"
    echo "After joining nodes: try to run kubectl get nodes to verify the status"
    

    Configure demo cluster usage as root
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.extensions/kube-flannel-ds created
    clusterrole.rbac.authorization.k8s.io/flannel configured
    clusterrolebinding.rbac.authorization.k8s.io/flannel configured
    serviceaccount/flannel unchanged
    configmap/kube-flannel-cfg unchanged
    daemonset.extensions/kube-flannel-ds-amd64 created
    daemonset.extensions/kube-flannel-ds-arm64 created
    daemonset.extensions/kube-flannel-ds-arm created
    daemonset.extensions/kube-flannel-ds-ppc64le created
    daemonset.extensions/kube-flannel-ds-s390x created
    Try to run kubectl get pods --all-namespaces
    After joining nodes: try to run kubectl get nodes to verify the status
    

    So I ran kubectl get pods --all-namespaces and got:

    [root@k8s-master nodesetup]# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
    kube-system   coredns-78fcdf6894-pflhc             0/1       Pending   0          33m
    kube-system   coredns-78fcdf6894-w7dxg             0/1       Pending   0          33m
    kube-system   etcd-k8s-master                      1/1       Running   0          27m
    kube-system   kube-apiserver-k8s-master            1/1       Running   0          27m
    kube-system   kube-controller-manager-k8s-master   0/1       Evicted   0          27m
    kube-system   kube-proxy-stfxm                     1/1       Running   0          28m
    kube-system   kube-scheduler-k8s-master            1/1       Running   0          27m
    

    [root@k8s-master nodesetup]# kubectl get nodes
    NAME         STATUS     ROLES     AGE       VERSION
    k8s-master   NotReady   master    35m       v1.11.2
    

    Some observations:

    When I first ran kubectl, the connection was refused; it turned out that it takes a few minutes until the services are up. While looking into that, however, I found a lot of this in /var/log/firewalld:

    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D PREROUTING' failed: iptables: Bad rule (does a matching rule exist in that chain?).
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D OUTPUT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -F DOCKER' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -X DOCKER' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -n -L DOCKER' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
    
    2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN' failed: iptables: Bad rule (does a matching rule exist in that chain?).
    

    Wrong Docker version? The Docker setup looks broken.
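    For what it is worth, the COMMAND_FAILED lines above are a known side effect of firewalld reloading while Docker's iptables chains exist (or existed); a commonly suggested workaround - only a guess for this setup - is to restart Docker after each firewall reload so it recreates its chains:

    # After changing firewall rules, reload firewalld and let Docker rebuild its iptables chains
    firewall-cmd --reload
    systemctl restart docker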

    Is there anything else I could check?... It is getting late - tomorrow I will try to join my workers again (within the 24-hour window of the initial token).

    [root@k8s-master ~]# kubectl get pods --all-namespaces=true
    NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
    kube-system   coredns-78fcdf6894-pflhc             0/1       Pending   0          10h
    kube-system   coredns-78fcdf6894-w7dxg             0/1       Pending   0          10h
    kube-system   etcd-k8s-master                      1/1       Running   0          10h
    kube-system   kube-apiserver-k8s-master            1/1       Running   0          10h
    kube-system   kube-controller-manager-k8s-master   1/1       Running   0          10h
    kube-system   kube-flannel-ds-amd64-crljm          0/1       Pending   0          1s
    kube-system   kube-flannel-ds-v6gcx                0/1       Pending   0          0s
    kube-system   kube-proxy-l2dck                     0/1       Pending   0          0s
    kube-system   kube-scheduler-k8s-master            1/1       Running   0          10h
    [root@k8s-master ~]# 
    

    [root@k8s-master ~]# kubectl get nodes
    NAME         STATUS    ROLES     AGE       VERSION
    k8s-master   Ready     master    10h       v1.11.2
    [root@k8s-master ~]# 
    

    Stay tuned... After work I will also fix docker/firewalld on the worker and then try to join the cluster again (now that I know how to issue a new token if needed). So the next update will follow within about 10 hours.

    1 Answer

    Nick_Kh · answered 6 years ago

    It looks like your kubeadm token has expired, judging by the kubelet log you attached:

    Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469    3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized

    The TTL of this token is 24 hours from the moment the kubeadm init command is issued; check this link.
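    If only the join command has been lost rather than the token itself, the discovery hash can also be recomputed on the master from the CA certificate (the command below follows the kubeadm documentation):

    # Recompute the value for --discovery-token-ca-cert-hash from the cluster CA
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'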

    The system runtime components on the master node do not look healthy, so I am not sure the cluster can work properly. Since the CoreDNS pods are stuck in Pending, have a look at the kubeadm troubleshooting document and check whether a Pod network provider has been installed on the cluster.
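    To see why the CoreDNS pods stay Pending, the scheduler's view can be inspected directly (a sketch; <coredns-pod-name> is a placeholder taken from the get pods output):

    # Where, if anywhere, the kube-system pods were scheduled
    kubectl -n kube-system get pods -o wide
    # The Events section at the bottom usually explains a Pending pod
    kubectl -n kube-system describe pod <coredns-pod-name>
    # A NotReady node typically means the pod network / CNI is not set up yet
    kubectl describe node k8s-master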

    I would suggest rebuilding the cluster in order to refresh the kubeadm tokens and to bootstrap the cluster system components from scratch.