Getting Started with Tungsten Fabric: Multi-Orchestrator Usage and Configuration (Part 1)
This article is part of the Getting Started with Tungsten Fabric series, compiled by the TF Chinese community from hands-on experience generously shared by technical experts. The series aims to help newcomers understand the full TF workflow: running, installation, integration, and debugging. If you have relevant experience or questions, you are welcome to interact with us and discuss further with the community. Author: Tatsuya Naganawa / Translator: TF Compilation Team
Sharing one control plane across multiple orchestrators has many benefits, including common routing/bridging, DNS, security, and so on.
Below I describe the usage and configuration for each combination.
K8s+OpenStack
The Kubernetes + OpenStack combination is already covered and works well.
In addition, Tungsten Fabric supports both nested installation and non-nested installation, so you can choose either option.
K8s+K8s
Adding multiple Kubernetes clusters to a single Tungsten Fabric cluster is one possible installation option.
Since kube-manager supports the cluster_name parameter, which changes the name of the tenant to be created (the default is "k8s"), this should be feasible. However, when I last tried this it did not work well, because some objects were deleted as stale objects by the other cluster's kube-manager.
This behavior may change in a future release.
Note: in R2002 and later, this fix resolves the issue, and the custom patches are no longer needed.
Note: with the following patches applied, it seems that multiple kube-masters can be added to a single Tungsten Fabric cluster.
diff --git a/src/container/kube-manager/kube_manager/kube_manager.py b/src/container/kube-manager/kube_manager/kube_manager.py
index 0f6f7a0..adb20a6 100644
--- a/src/container/kube-manager/kube_manager/kube_manager.py
+++ b/src/container/kube-manager/kube_manager/kube_manager.py
@@ -219,10 +219,10 @@ def main(args_str=None, kube_api_skip=False, event_queue=None,
     if args.cluster_id:
         client_pfx = args.cluster_id + '-'
-        zk_path_pfx = args.cluster_id + '/'
+        zk_path_pfx = args.cluster_id + '/' + args.cluster_name
     else:
         client_pfx = ''
-        zk_path_pfx = ''
+        zk_path_pfx = '' + args.cluster_name
 
     # randomize collector list
     args.random_collectors = args.collectors

diff --git a/src/container/kube-manager/kube_manager/vnc/vnc_namespace.py b/src/container/kube-manager/kube_manager/vnc/vnc_namespace.py
index 00cce81..f968cae 100644
--- a/src/container/kube-manager/kube_manager/vnc/vnc_namespace.py
+++ b/src/container/kube-manager/kube_manager/vnc/vnc_namespace.py
@@ -594,7 +594,8 @@ class VncNamespace(VncCommon):
         self._queue.put(event)
 
     def namespace_timer(self):
-        self._sync_namespace_project()
+        # self._sync_namespace_project() ## temporary disabled
+        pass
 
     def _get_namespace_firewall_ingress_rule_name(self, ns_name):
         return "-".join([vnc_kube_config.cluster_name(),
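To see why the first hunk helps, here is a minimal standalone sketch (my own illustration, not part of the patch) of the resulting prefix logic. With the cluster name appended, each kube-manager locks and allocates IDs under its own ZooKeeper subtree, so one cluster's kube-manager no longer treats the other cluster's znodes as stale:

# Sketch mirroring the patched zk_path_pfx logic in kube_manager.py.
# zk_path_prefix() is a hypothetical helper for illustration only.
def zk_path_prefix(cluster_id, cluster_name):
    if cluster_id:
        return cluster_id + '/' + cluster_name
    return '' + cluster_name

# Two kube-managers with different KUBERNETES_CLUSTER_NAME values now get
# disjoint ZooKeeper paths instead of both using the default empty prefix:
assert zk_path_prefix(None, 'k8s1') != zk_path_prefix(None, 'k8s2')
print(zk_path_prefix(None, 'k8s1'))  # -> k8s1
print(zk_path_prefix(None, 'k8s2'))  # -> k8s2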
- Since the pod-networks created by those kube-masters live on the same Tungsten Fabric controller, route-leak between them is possible :)
- Since the cluster_name becomes one of the tags in Tungsten Fabric's fw-policy, the same tag can also be used across multiple Kubernetes clusters (a sketch of inspecting those tags follows below)
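As a quick way to verify the tag point, the following sketch (my own illustration, assuming the vnc_api Python client is installed and the API server allows default credentials; adjust host and auth for your setup) lists the tags registered in the Tungsten Fabric API server, where the cluster name shows up as a label:

# List tags known to the Tungsten Fabric API server; labels created by
# kube-manager carry the cluster name, keeping k8s1/k8s2 objects distinct.
from vnc_api.vnc_api import VncApi

vnc = VncApi(api_server_host='172.31.9.29')  # the controller in the setup below
for ref in vnc.tags_list()['tags']:
    tag = vnc.tag_read(id=ref['uuid'])
    print(tag.get_fq_name_str())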
172.31.9.29   Tungsten Fabric controller
172.31.22.24  kube-master1 (KUBERNETES_CLUSTER_NAME=k8s1 is set)
172.31.12.82  kube-node1 (it belongs to kube-master1)
172.31.41.5   kube-master2 (KUBERNETES_CLUSTER_NAME=k8s2 is set)
172.31.4.1    kube-node2 (it belongs to kube-master2)

[root@ip-172-31-22-24 ~]# kubectl get node
NAME                                              STATUS     ROLES    AGE   VERSION
ip-172-31-12-82.ap-northeast-1.compute.internal   Ready      <none>   57m   v1.12.3
ip-172-31-22-24.ap-northeast-1.compute.internal   NotReady   master   58m   v1.12.3
[root@ip-172-31-22-24 ~]#

[root@ip-172-31-41-5 ~]# kubectl get node
NAME                                             STATUS     ROLES    AGE   VERSION
ip-172-31-4-1.ap-northeast-1.compute.internal    Ready      <none>   40m   v1.12.3
ip-172-31-41-5.ap-northeast-1.compute.internal   NotReady   master   40m   v1.12.3
[root@ip-172-31-41-5 ~]#

[root@ip-172-31-22-24 ~]# kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE                                              NOMINATED NODE
cirros-deployment-75c98888b9-7pf82   1/1     Running   0          28m     10.47.255.249   ip-172-31-12-82.ap-northeast-1.compute.internal   <none>
cirros-deployment-75c98888b9-sgrc6   1/1     Running   0          28m     10.47.255.250   ip-172-31-12-82.ap-northeast-1.compute.internal   <none>
cirros-vn1                           1/1     Running   0          7m56s   10.0.1.3        ip-172-31-12-82.ap-northeast-1.compute.internal   <none>
[root@ip-172-31-22-24 ~]#

[root@ip-172-31-41-5 ~]# kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE                                            NOMINATED NODE
cirros-deployment-75c98888b9-5lqzc   1/1     Running   0          27m     10.47.255.250   ip-172-31-4-1.ap-northeast-1.compute.internal   <none>
cirros-deployment-75c98888b9-dg8bf   1/1     Running   0          27m     10.47.255.249   ip-172-31-4-1.ap-northeast-1.compute.internal   <none>
cirros-vn2                           1/1     Running   0          5m36s   10.0.2.3        ip-172-31-4-1.ap-northeast-1.compute.internal   <none>
[root@ip-172-31-41-5 ~]#

/ # ping 10.0.2.3
PING 10.0.2.3 (10.0.2.3): 56 data bytes
64 bytes from 10.0.2.3: seq=83 ttl=63 time=1.333 ms
64 bytes from 10.0.2.3: seq=84 ttl=63 time=0.327 ms
64 bytes from 10.0.2.3: seq=85 ttl=63 time=0.319 ms
64 bytes from 10.0.2.3: seq=86 ttl=63 time=0.325 ms
^C
--- 10.0.2.3 ping statistics ---
87 packets transmitted, 4 packets received, 95% packet loss
round-trip min/avg/max = 0.319/0.576/1.333 ms
/ #
/ # ip -o a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue \    link/ether 02:b9:11:c9:4c:b1 brd ff:ff:ff:ff:ff:ff
18: eth0    inet 10.0.1.3/24 scope global eth0\       valid_lft forever preferred_lft forever
/ #
 -> ping between pods, which belong to different kubernetes clusters, worked well

[root@ip-172-31-9-29 ~]# ./contrail-introspect-cli/ist.py ctr route show -t default-domain:k8s1-default:vn1:vn1.inet.0

default-domain:k8s1-default:vn1:vn1.inet.0: 2 destinations, 2 routes (1 primary, 1 secondary, 0 infeasible)

10.0.1.3/32, age: 0:06:50.001343, last_modified: 2019-Jul-28 18:23:08.243656
    [XMPP (interface)|ip-172-31-12-82.local] age: 0:06:50.005553, localpref: 200, nh: 172.31.12.82, encap: ['gre', 'udp'], label: 50, AS path: None

10.0.2.3/32, age: 0:02:25.188713, last_modified: 2019-Jul-28 18:27:33.056286
    [XMPP (interface)|ip-172-31-4-1.local] age: 0:02:25.193517, localpref: 200, nh: 172.31.4.1, encap: ['gre', 'udp'], label: 50, AS path: None
[root@ip-172-31-9-29 ~]#

[root@ip-172-31-9-29 ~]# ./contrail-introspect-cli/ist.py ctr route show -t default-domain:k8s2-default:vn2:vn2.inet.0

default-domain:k8s2-default:vn2:vn2.inet.0: 2 destinations, 2 routes (1 primary, 1 secondary, 0 infeasible)

10.0.1.3/32, age: 0:02:36.482764, last_modified: 2019-Jul-28 18:27:33.055702
    [XMPP (interface)|ip-172-31-12-82.local] age: 0:02:36.489419, localpref: 200, nh: 172.31.12.82, encap: ['gre', 'udp'], label: 50, AS path: None

10.0.2.3/32, age: 0:04:37.126317, last_modified: 2019-Jul-28 18:25:32.412149
    [XMPP (interface)|ip-172-31-4-1.local] age: 0:04:37.133912, localpref: 200, nh: 172.31.4.1, encap: ['gre', 'udp'], label: 50, AS path: None
[root@ip-172-31-9-29 ~]#
 -> each virtual-network in each kube-master has a route to the other kube-master's pod, based on the network-policy below

(venv) [root@ip-172-31-9-29 ~]# contrail-api-cli --host 172.31.9.29 ls -l virtual-network
virtual-network/f9d06d27-8fc1-413d-a6d6-c51c42191ac0  default-domain:k8s2-default:vn2
virtual-network/384fb3ef-247b-42e6-a628-7111fe343f90  default-domain:k8s2-default:k8s2-default-service-network
virtual-network/c3098210-983b-46bc-b750-d06acfc66414  default-domain:k8s1-default:k8s1-default-pod-network
virtual-network/1ff6fdbd-ac2e-4601-b08c-5f7255466312  default-domain:default-project:ip-fabric
virtual-network/d8d95738-0a00-457f-b21e-60304859d1f9  default-domain:k8s2-default:k8s2-default-pod-network
virtual-network/0c075b76-4219-4f79-a4f5-1b4e6729f16e  default-domain:k8s1-default:k8s1-default-service-network
virtual-network/985b3b5f-84b7-4810-a54d-abd09a37f525  default-domain:k8s1-default:vn1
virtual-network/23782ea7-4000-491f-b20d-01c6ab9e2ba8  default-domain:default-project:default-virtual-network
virtual-network/90cce352-ef9b-4358-81b3-ef87a9cb63e8  default-domain:default-project:__link_local__
virtual-network/0292810c-c511-4147-89c0-9fdd571ccce8  default-domain:default-project:dci-network
(venv) [root@ip-172-31-9-29 ~]#
(venv) [root@ip-172-31-9-29 ~]# contrail-api-cli --host 172.31.9.29 ls -l network-policy
network-policy/134d38b2-79e2-4a3e-a2f7-a3d61ceaf5e2  default-domain:k8s1-default:vn1-to-vn2
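For reference, a network-policy like the vn1-to-vn2 one listed above can also be created programmatically. The following is a minimal sketch, assuming the vnc_api Python client and the FQ names from this setup; it builds a typical any/any "pass" rule between the two virtual-networks and attaches it to both, which is what makes the routes leak between the clusters:

# Sketch: create an allow-all network-policy between vn1 (cluster k8s1)
# and vn2 (cluster k8s2), then attach it to both virtual-networks.
from vnc_api.vnc_api import (
    VncApi, NetworkPolicy, PolicyEntriesType, PolicyRuleType, AddressType,
    PortType, ActionListType, VirtualNetworkPolicyType, SequenceType)

vnc = VncApi(api_server_host='172.31.9.29')
vn1 = vnc.virtual_network_read(fq_name=['default-domain', 'k8s1-default', 'vn1'])
vn2 = vnc.virtual_network_read(fq_name=['default-domain', 'k8s2-default', 'vn2'])
proj = vnc.project_read(fq_name=['default-domain', 'k8s1-default'])

# Bidirectional any-protocol rule between the two VNs; -1/-1 means any port.
rule = PolicyRuleType(
    direction='<>', protocol='any',
    src_addresses=[AddressType(virtual_network=vn1.get_fq_name_str())],
    dst_addresses=[AddressType(virtual_network=vn2.get_fq_name_str())],
    src_ports=[PortType(-1, -1)], dst_ports=[PortType(-1, -1)],
    action_list=ActionListType(simple_action='pass'))

policy = NetworkPolicy('vn1-to-vn2', parent_obj=proj,
                       network_policy_entries=PolicyEntriesType([rule]))
vnc.network_policy_create(policy)

# Attaching the policy to both VNs is what enables the route-leak.
for vn in (vn1, vn2):
    vn.add_network_policy(
        policy, VirtualNetworkPolicyType(sequence=SequenceType(0, 0)))
    vnc.virtual_network_update(vn)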