Tungsten Fabric in Practice: K8s Deployment Pitfalls (Part 2)



  • Tungsten Fabric in Practice: K8s Deployment Pitfalls (Part 1)

    Redeploying

    • Decided to bite the bullet and redeploy the 1-master/2-node k8s scenario, still using the same deployer as before (the config and commands behind this run are sketched after the logs below)
    • The run logs:
    [root@deployer contrail-ansible-deployer]# cat install_k8s_3node.log 
    ...
    PLAY RECAP **********************************************************************************************************************************************************************************************************************************
    192.168.122.116            : ok=31   changed=15   unreachable=0    failed=0   
    192.168.122.146            : ok=23   changed=8    unreachable=0    failed=0   
    192.168.122.204            : ok=23   changed=8    unreachable=0    failed=0   
    localhost                  : ok=62   changed=4    unreachable=0    failed=0  
    
    [root@deployer contrail-ansible-deployer]# cat install_contrail_3node.log
    ...
    PLAY RECAP **********************************************************************************************************************************************************************************************************************************
    192.168.122.116            : ok=76   changed=45   unreachable=0    failed=0   
    192.168.122.146            : ok=37   changed=17   unreachable=0    failed=0   
    192.168.122.204            : ok=37   changed=17   unreachable=0    failed=0   
    localhost                  : ok=66   changed=4    unreachable=0    failed=0
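
    For the record, the redeployment is just the standard contrail-ansible-deployer flow. Below is a rough sketch of the config and commands behind the two logs above; the IPs match the PLAY RECAP, but the exact role layout, the omitted registry/version settings, and the way output is tee'd into log files are my assumptions, so adapt them to your own environment.

    # config/instances.yaml -- minimal sketch for a 1-master/2-node cluster
    # (role names follow contrail-ansible-deployer conventions)
    provider_config:
      bms:
        ssh_user: root
        ssh_pwd: <password>
    instances:
      master02:
        provider: bms
        ip: 192.168.122.116
        roles:
          config_database:
          config:
          control:
          analytics_database:
          analytics:
          webui:
          k8s_master:
          kubemanager:
      node02:
        provider: bms
        ip: 192.168.122.146
        roles:
          vrouter:
          k8s_node:
      node03:
        provider: bms
        ip: 192.168.122.204
        roles:
          vrouter:
          k8s_node:
    # contrail_configuration / global_configuration (registry, CONTRAIL_VERSION, ...)
    # omitted here, they are site specific

    # the usual three playbooks, with output captured into the logs quoted above
    ansible-playbook -i inventory/ -e orchestrator=kubernetes playbooks/configure_instances.yml
    ansible-playbook -i inventory/ -e orchestrator=kubernetes playbooks/install_k8s.yml | tee install_k8s_3node.log
    ansible-playbook -i inventory/ -e orchestrator=kubernetes playbooks/install_contrail.yml | tee install_contrail_3node.log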
    

    The new master came up NotReady, so check the kubelet status:

    [root@master02 ~]# systemctl status kubelet
    ● kubelet.service - kubelet: The Kubernetes Node Agent
       Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
      Drop-In: /usr/lib/systemd/system/kubelet.service.d
               └─10-kubeadm.conf
       Active: active (running) since 三 2020-03-18 16:04:35 +08; 32min ago
         Docs: https://kubernetes.io/docs/
     Main PID: 18801 (kubelet)
        Tasks: 20
       Memory: 60.3M
       CGroup: /system.slice/kubelet.service
               └─18801 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni
    
    3月 18 16:36:51 master02 kubelet[18801]: W0318 16:36:51.929447   18801 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
    3月 18 16:36:51 master02 kubelet[18801]: E0318 16:36:51.929572   18801 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready...fig uninitialized
    3月 18 16:36:56 master02 kubelet[18801]: W0318 16:36:56.930736   18801 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
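
    The kubelet journal is one way to see this; the node object should carry essentially the same "cni config uninitialized" message in its Ready condition, so you can also confirm it from the API side with something like:

    # why is the node NotReady? (run anywhere with a working kubeconfig)
    kubectl describe node master02 | grep -A8 'Conditions:'
    # or pull just the Ready condition's message
    kubectl get node master02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'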
    

    Sure enough, the master has no /etc/cni/net.d directory at all, so copy node02's config over and restart kubelet:

    [root@master02 ~]# mkdir -p /etc/cni/net.d/
    [root@master02 ~]# scp root@192.168.122.146:/etc/cni/net.d/10-contrail.conf /etc/cni/net.d/10-contrail.conf
    
    [root@master02 ~]# systemctl restart kubelet
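
    For context, 10-contrail.conf is the Contrail CNI config that the deployer drops on every node running a vrouter, which is presumably why the master never got one. Roughly it looks like the sketch below; the field names and values are from memory of a 5.x setup and may differ in your build, so always copy the real file from a working node as above rather than writing it by hand:

    # /etc/cni/net.d/10-contrail.conf -- approximate contents, not a reference
    {
        "cniVersion": "0.3.1",
        "contrail": {
            "vrouter-ip": "127.0.0.1",
            "vrouter-port": 9091,
            "config-dir": "/var/lib/contrail/ports/vm",
            "log-file": "/var/log/contrail/cni/opencontrail.log",
            "log-level": "4"
        },
        "name": "contrail-k8s-cni",
        "type": "contrail-k8s-cni"
    }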
    

    Problem solved:

    [root@master02 ~]# kubectl get node
    NAME                    STATUS   ROLES    AGE   VERSION
    localhost.localdomain   Ready    <none>   35m   v1.12.9
    master02                Ready    master   35m   v1.12.9
    node03                  Ready    <none>   35m   v1.12.9
    [root@master02 ~]#
    
    • If you deploy two environments with one deployer, the browser complains when you open the web UI (you can verify the serial clash with the openssl check below):
      [screenshot: browser error "certificate contains the same serial number as another certificate"]
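
    To confirm that the two environments really are presenting certificates with the same serial number, compare what each web UI serves; 8143 is the default Tungsten Fabric webui HTTPS port, and the second IP is a placeholder for the other environment's master:

    # dump the serial number of the certificate each webui presents
    echo | openssl s_client -connect 192.168.122.116:8143 2>/dev/null | openssl x509 -noout -serial
    echo | openssl s_client -connect <other-master-ip>:8143 2>/dev/null | openssl x509 -noout -serial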

    • The fix is described here: https://support.mozilla.org/en-US/kb/Certificate-contains-the-same-serial-number-as-another-certificate

    [screenshot: removing the conflicting certificate in the browser's certificate manager, per the KB article above]

    • All the pods are healthy now:
    [root@master02 ~]# kubectl get pods -n kube-system -o wide
    NAME                                    READY   STATUS    RESTARTS   AGE   IP                NODE                    NOMINATED NODE
    coredns-85c98899b4-4vgk4                1/1     Running   0          69m   10.47.255.252     node03                  <none>
    coredns-85c98899b4-thpz6                1/1     Running   0          69m   10.47.255.251     localhost.localdomain   <none>
    etcd-master02                           1/1     Running   0          55m   192.168.122.116   master02                <none>
    kube-apiserver-master02                 1/1     Running   0          55m   192.168.122.116   master02                <none>
    kube-controller-manager-master02        1/1     Running   0          55m   192.168.122.116   master02                <none>
    kube-proxy-6sp2n                        1/1     Running   0          69m   192.168.122.116   master02                <none>
    kube-proxy-8gpgd                        1/1     Running   0          69m   192.168.122.204   node03                  <none>
    kube-proxy-wtvhd                        1/1     Running   0          69m   192.168.122.146   localhost.localdomain   <none>
    kube-scheduler-master02                 1/1     Running   0          55m   192.168.122.116   master02                <none>
    kubernetes-dashboard-76456c6d4b-9s6vc   1/1     Running   0          69m   192.168.122.204   node03                  <none>
    [root@master02 ~]#
    
