
  • Boosting Tungsten Fabric vRouter performance with DDP

    At the recently concluded 2020 Virtual Developer and Test Forum, Kiran KN and colleagues from Juniper Networks presented a set of performance improvements made to the Tungsten Fabric data plane, enabled by Intel DDP technology. The highlights of the talk follow.

    vRouter as a DPDK application


    Before diving into DDP, let's first introduce the vRouter: what it is, and where it sits in the overall Tungsten Fabric framework.

    The vRouter can be deployed on a regular x86 server, typically on an OpenStack or Kubernetes compute node. It is the main data-plane component and comes in two deployment modes: vRouter as a kernel module, and vRouter in DPDK mode.
    85b1fdbb-52b6-4392-b785-5f0a125c7aa9-image.png

    This use case concerns the DPDK build of the vRouter. The vRouter is the data plane: it forwards packets according to the forwarding state that the vRouter agent programs on the compute node, while the configuration itself is delivered from the controller over XMPP. The vRouter agent talks to the controller via XMPP and uses a dedicated interface to program the vRouter data plane for packet forwarding.

    In DPDK mode, the vRouter is a high-performance, multi-core, multi-threaded application. The point to emphasize is that it is a DPDK application built around multiple cores, so those cores have to be used correctly.

    As the example shows, the NIC is configured with the same number of queues as the vRouter has forwarding cores, with one core assigned per queue.

    First, packets have to be distributed evenly by the NIC across all the vRouter forwarding cores. This is done with a hash over the 5-tuple, which spreads flows correctly across the cores. For the balancing to work, each packet must expose the full 5-tuple: source and destination IP addresses, source and destination ports, and the protocol. When that information is present, the hash shares traffic evenly among all the cores assigned to the DPDK vRouter, and we can exploit the performance of every one of them. Once the packets have been spread across the processing cores, they are placed into the appropriate TX interface queues towards the virtual machines.

    If the NIC does not balance the traffic properly across the cores, the vRouter has to rebalance it in software, re-hashing packets and redistributing them between cores. That is expensive: it consumes CPU cycles and adds extra latency.

    That was the problem we faced; today it is largely solved, and we can rely on the NIC to do this job and balance traffic properly across the vRouter cores.

    91bdcc76-68c2-40f2-98ac-8adf2b9f2db5-image.png

    The catch is that MPLSoGRE traffic between compute nodes does not have enough entropy: the outer headers simply do not carry enough information for the NIC to balance the packet load correctly.

    Specifically, a packet should expose the full 5-tuple: source IP, destination IP, source port, destination port and protocol. With MPLSoGRE the NIC only sees a 3-tuple: source IP, destination IP and protocol. As a result it cannot balance the load properly; all packets between a given pair of compute nodes hash to the same value and land on the same CPU core, so that single NIC queue becomes the bottleneck for the whole compute node and performance suffers.

    For example, suppose there are thousands of flows between a pair of compute nodes. Ideally we would like those flows spread across all the cores so that different CPUs can pick them up for packet processing. With MPLSoGRE, however, the known header fields do not provide enough entropy, so everything arriving from a given compute node hashes identically: even with many flows, the NIC does not distribute them across its queues. Instead of being spread over multiple cores, the packets are all delivered to a single core.

    So even though there are plenty of CPU cores, all packets must pass through C1, and C1 effectively becomes the bottleneck. Packets cannot flow directly to C2, C3 or C4, because the hardware never places them in those queues; every other core has to fetch its packets from C1, and C1 is clearly overloaded.
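
    To make the entropy argument concrete, here is a minimal, illustrative Python sketch. It is not the NIC's actual RSS/Toeplitz hash; any deterministic hash stands in for it. It shows how a 5-tuple hash spreads many flows across cores, while the MPLSoGRE outer 3-tuple maps every flow between the same pair of compute nodes to a single core.

    import random
    from collections import Counter

    CORES = 8

    def queue_for(key_fields, cores=CORES):
        # Stand-in for the NIC RSS hash: any deterministic hash over the
        # selected header fields, reduced to a queue/core index.
        return hash(key_fields) % cores

    # Thousands of flows between one pair of compute nodes (fixed outer IPs).
    src_ip, dst_ip, proto = "10.0.0.1", "10.0.0.2", 47  # 47 = GRE
    flows = [(src_ip, dst_ip, random.randint(1024, 65535), 80, 6)
             for _ in range(5000)]  # inner 5-tuples differ per flow

    # Full inner 5-tuple visible -> flows spread over all cores.
    five_tuple = Counter(queue_for(f) for f in flows)

    # MPLSoGRE outer header only: src IP, dst IP, protocol -> one core.
    three_tuple = Counter(queue_for((src_ip, dst_ip, proto)) for _ in flows)

    print("5-tuple distribution:", dict(five_tuple))
    print("3-tuple distribution:", dict(three_tuple))  # all 5000 flows on one core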

    Eliminating the bottleneck with DDP


    We introduced a new capability into the Tungsten Fabric data plane that removes this bottleneck for MPLSoGRE packets, so that performance scales with the number of CPU cores. No single CPU core becomes a choke point, because the NIC hardware distributes packets evenly across all of them.

    Our solution is powered by Intel DDP (Dynamic Device Personalization), available on the Ethernet 700 series adapters. With Intel's move to a programmable pipeline model, features such as firmware upgradability were introduced, and DDP allows the packet-processing pipeline inside the NIC to be reconfigured dynamically at runtime, without rebooting the server. Software can apply custom profiles to the NIC; think of them as add-ons that end users can build themselves. Once a profile is flashed to the NIC, the NIC can recognize and classify new packet types on the fly and steer those packets to different Rx queues.

    This is how it plays out for MPLSoGRE. In the first figure, without DDP the parser cannot extract the inner headers that are actually present inside the MPLSoGRE packet, so it does not have enough information to distribute the packets correctly. In the second figure, with DDP enabled, the profile lets the NIC recognize the inner packet headers, including the inner IP and inner UDP headers, and use that information to compute the hash.

    如何使DDP成为最终用户需要为其数据包类型创建配置文件的方式?

    It is done with Intel's profile editor tool; Intel also publishes a number of standard profiles that can be downloaded directly from its website. Step 1: use the profile editor to create new parser entries or modify existing ones. Step 2: create a new profile for MPLSoGRE packets that defines the structure of the packet headers layer by layer. Step 3: compile it and build a binary package that can be applied to the NIC. Step 4: load the profile onto the NIC, per port/interface, using the DPDK API. Step 5: the NIC can now recognize MPLSoGRE packets.
    ba33b804-ce80-48c8-b55d-4a644e009364-image.png

    Testing and confirming the performance gains


    Next, we needed to test and confirm that DDP actually delivers a performance improvement.

    Our test framework is used extensively for developing and testing the vRouter. We use a proxy-style setup to get quick proofs of concept and to collect results. All vRouter performance tests involve encapsulated traffic between compute nodes and always include the overlay network. As the diagram shows, a third element, the rapid jump VM, carries only control traffic (no data traffic); it sends instructions to the test VMs and collects the results. To obtain the throughput figure we run a binary search for the highest rate with a packet loss of no more than 0.001%, following the standard test framework and specifications.
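
    The binary search itself is straightforward. Below is a simplified, hypothetical sketch of the idea in Python; measure_loss() stands in for a run of the real traffic generator and is not part of the actual test framework.

    def measure_loss(rate_mpps):
        """Hypothetical hook: run the traffic generator at rate_mpps for one
        trial and return the observed packet-loss ratio (e.g. 0.00001)."""
        raise NotImplementedError

    def find_throughput(lo=0.0, hi=20.0, max_loss=0.00001, iterations=12):
        """Binary-search the highest offered load (Mpps) whose loss stays
        at or below max_loss (0.001%), RFC 2544 style."""
        best = lo
        for _ in range(iterations):
            rate = (lo + hi) / 2
            if measure_loss(rate) <= max_loss:
                best, lo = rate, rate   # trial passed: try a higher rate
            else:
                hi = rate               # trial failed: try a lower rate
        return best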

    The vRouter ships with scripts that show per-core packet-processing statistics, which let us verify that the NIC really is spreading traffic across all the cores. These statistics are taken from the VM0 interface, that is, the interface facing the physical NIC.

    On the left you can see that core 1 is not processing any packets; in fact it never receives any packets to forward. It is busy purely polling packets out of the NIC queue and redistributing them to the other available vRouter cores. That makes the vRouter itself the bottleneck: all incoming traffic first has to be pulled from the NIC queue by this one core and then redistributed across the other cores before being forwarded to the VMs.

    On the right, with DDP, the NIC has distributed the traffic correctly: the Rx queues of all cores carry almost equal traffic, proving the NIC is doing its job and sharing the load evenly.

    The performance results show the statistical difference DDP makes. With no more than three cores there is no benefit from DDP, because a single polling core is still fast enough to pull packets from the NIC queue and redistribute them across the other cores.

    But once we add more cores to raise overall performance, the NIC queue becomes the bottleneck: without DDP, performance stops improving no matter how many cores we add, because a single core is always pulling all the traffic. As you can see, roughly 6.5 Mpps is the most one core can poll from the NIC queue in the non-DDP case.

    7785c8a5-e13d-4fc6-91b8-a0262ba0695d-image.png

    With DDP, as the core count grows, every core receives the same share of traffic from the NIC, and beyond six cores the gain becomes even larger. With six cores the throughput gain is roughly 73%, which is a really nice number. DDP does not only raise throughput, it also reduces latency, because the vRouter no longer has to rebalance traffic between cores or compute a hash for every packet; the NIC does that instead. The latency improvement is about 40% on average and up to 80%, which is excellent.

    In summary, for use cases with many cores DDP brings substantial gains. The latency reduction is especially important for 5G use cases: wherever MPLSoGRE is used, the vRouter is now ready for multi-core 5G deployments.

    [Download the PDF slides for this talk]
    https://tungstenfabric.org.cn/assets/uploads/files/tf-vrouter-performance-improvements.pdf

    [Video link]
    https://v.qq.com/x/page/j3108a4m1va.html

    posted in News
  • TF service chains in detail: what a service chain is, explained in one article (with diagrams)

    Author: Umberto Manferdini  Translator: TF Compilation Team

    If you have watched any demo about Tungsten Fabric (note: the original text says Contrail; in this series Tungsten Fabric and Contrail are functionally identical, and every occurrence of Contrail has been replaced with Tungsten Fabric), you have probably run into the buzzword "service chain". Now it is time to get hands-on with this feature.

    So what is a service chain? In a nutshell, it makes traffic flowing between two virtual networks pass through one or more "services" along the way.

    Let's take an example. We have two virtual networks, pippo and pluto, and we want them to talk to each other. In Tungsten Fabric this can be achieved simply by configuring the same RT (route target) on both virtual networks (virtual networks are VRFs, remember?). (Note: a route target, or RT, is the tag Tungsten Fabric attaches to routes; it is the same route-advertisement tag commonly used in MPLS.)

    Alternatively, we can build a network policy applied to both networks that says "allow any traffic between these virtual networks". Behind the scenes, Tungsten Fabric still relies on route targets (hidden, automatically generated ones).
    ae334598-d119-49a1-bb58-164c5830b184-image.png

    So we might say the two approaches are equivalent.

    That looks right, but it is fundamentally wrong! With a network policy we can specify one or more service instances that traffic must traverse when moving between the virtual networks, which the first approach cannot do. And that is a service chain! (Note: what the author means is that although the outcome looks the same, what gets implemented differs. With the same-RT approach the two networks simply share route attributes, so routes leak between them and connectivity opens up; the policy approach works differently.)

    As mentioned, a service chain can include one or more services. Traffic from pippo to pluto could traverse just a firewall, or a firewall and a DPI, one after the other.
    f5eeb7db-897a-42b5-b059-c347bd32841e-image.png

    At first glance someone might say: "Fine, cool! But I could do the same thing with routing..." True, but what really matters here is ease of deployment. Tungsten Fabric takes care of everything and configures all the required routes automatically. You just tell Tungsten Fabric your intent: "let these networks talk, and make the traffic go through these service instances". We live in the intent-based era, don't we?

    Of course, that is not the whole story. Tungsten Fabric brings other features to the table, such as health checks, to provide high availability and scaling. Moreover, the network policy itself can also be used to deny/allow traffic with L4-based rules.

    So, the question now is: what does it take to create a service chain?

    Let's go through all the building blocks!

    First, we need two virtual networks. There is no need to configure any route target on them.
    159ddce3-01f6-4075-a70d-07b2241899ca-image.png

    Next, we configure a network policy between them that allows all traffic through:
    dca3e240-12d4-4fc3-8143-d3a507899355-image.png

    At this point the two networks can talk to each other! Time to move on to the service chain.

    First, we create the virtual machine that will become part of our service instance. This is the VM (our VNF) that traffic between the two virtual networks will traverse!

    This VM could be a firewall, for example. It must be created in OpenStack, just like any other VM launched through Nova.

    e547a91b-c651-4be7-9730-98327b613abe-image.png

    This is the only action performed on OpenStack.

    Next, back to Tungsten Fabric! We create an object called a service template:
    0e28ef51-1d32-462a-ab7b-e123fde376ee-image.png

    As the name suggests, a service template describes a service. The following five parameters must be configured:

    • Version, which must be v2
    • Virtualization type, which must be virtual machine unless you plan to use a physical network function (PNF)
    • Service type, which can be firewall or analyzer; we use the first one
    • Service mode, which can be transparent (bump in the wire), in-network (the most common), or in-network-nat (a special case when NAT is used)
    • Interface list; normally two interfaces are defined, left and right

    Since this is a template, it can be reused for different VMs. For example, both a Juniper firewall service instance and a third-party vendor firewall service instance can be deployed from the same service template. What matters is that in both cases the VM created in OpenStack has two interfaces that can be mapped to the interfaces defined in the service template (left and right).

    Next, we create the service instance. Many things can be configured on a service instance object; here we focus on the minimum needed to make the chain work.
    f18eb10e-ee51-49c7-a5eb-f67e0337d177-image.png

    The service instance references the service template. Once that reference is set, the interfaces defined in the template (left and right) can be mapped to actual virtual networks. In this case, for example, left maps to fourcade and right maps to wierer.

    Now we introduce a key object: the port tuple. It is a tuple of references to virtual machine interfaces. As mentioned before, the actual VM acting as the firewall is not defined by Tungsten Fabric; it is created like any other VM in OpenStack. However, we need to "link" that VM to our service instance, and that is what the port tuple does. The "linking" happens at the virtual machine interface (VMI) level.

    In this case the port tuple contains two elements, one for each interface defined in the service template (left and right). In addition, we map the service template interfaces to virtual networks (left to fourcade, right to wierer).

    Now let's look at the VM. It has three ports: eth0 is connected to a virtual network we do not care about, eth1 to fourcade, and eth2 to wierer. What comes next is obvious: the port tuple will include eth1 and eth2. This is how we tell Tungsten Fabric where traffic should flow when traversing the service instance.

    Nothing prevents a single service instance from having multiple port tuples... ECMP, anyone? Active/backup? ...Any ideas? We will deal with that later.

    For now, let's stay focused on this use case. Where are we? Traffic from a VM in the fourcade network is destined to an IP address in the wierer network, so it needs to go from fourcade to wierer. This communication is allowed by the network policy. Since the policy says traffic from fourcade to wierer must pass through the service instance, packets are sent to the VM's eth1 port and exit through eth2 into the wierer network.

    2b16c31a-e063-494c-90be-67e11469d30f-image.png

    Since Tungsten Fabric is flow-based, symmetry and stickiness are guaranteed!

    So much for the theory (quite an Everest of it).

    In the next article we will look at a real example, which will show how easy it is to create a service chain and how Tungsten Fabric hides all the complexity!

    The "Tungsten Fabric Architecture" series:

    Part 1: TF's main features and use cases
    Part 2: How TF works
    Part 3: The vRouter architecture in detail
    Part 4: TF service chains
    Part 5: vRouter deployment options
    Part 6: How TF collects, analyzes, and deploys
    Part 7: How TF orchestrates
    Part 8: An overview of the APIs TF supports
    Part 9: How TF connects to physical networks
    Part 10: TF application-based security policies

    posted in Blog
  • Tungsten Fabric Solution Guide: Kubernetes Integration (Part 2)

    Tungsten Fabric Solution Guide: Kubernetes Integration (Part 1)

    Isolated port

    {    "fq_name": [        "default-domain",        "kubernetes",        "dev-client__c64b3b12-f7b5-11e7-8f66-52540065dced"    ],    "virtual_machine_interface_mac_addresses": {        "mac_address": [            "02:c6:4b:3b:12:f7"        ]    },    "display_name": "dev__dev-client",    "security_group_refs": [        {            "to": [                "default-domain",                "kubernetes",                "k8s-default-dev-sg"            ],            "href": "http://127.0.0.1:8082/security-group/579019d5-038e-4901-b6ab-ed146022dd70",            "attr": null,            "uuid": "579019d5-038e-4901-b6ab-ed146022dd70"        },        {            "to": [                "default-domain",                "kubernetes",                "k8s-default-dev-default"            ],            "href": "http://127.0.0.1:8082/security-group/e43caf6e-6b35-40c3-b336-83c155078efe",            "attr": null,            "uuid": "e43caf6e-6b35-40c3-b336-83c155078efe"        }    ],    "routing_instance_refs": [        {            "to": [                "default-domain",                "kubernetes",                "dev-vn",                "dev-vn"            ],            "href": "http://127.0.0.1:8082/routing-instance/45173786-a1b4-4c75-8ef0-590de67d2d05",            "attr": {                "direction": "both",                "protocol": null,                "ipv6_service_chain_address": null,                "dst_mac": null,                "mpls_label": null,                "vlan_tag": null,                "src_mac": null,                "service_chain_address": null            },            "uuid": "45173786-a1b4-4c75-8ef0-590de67d2d05"        }    ],    "virtual_machine_interface_disable_policy": false,    "parent_type": "project",    "perms2": {        "owner": "None",        "owner_access": 7,        "global_access": 0,        "share": []    },    "virtual_network_refs": [        {            "to": [                "default-domain",                "kubernetes",                "dev-vn"            ],            "href": "http://127.0.0.1:8082/virtual-network/ce01826b-e3e6-407f-8798-80612018e89c",            "attr": null,            "uuid": "ce01826b-e3e6-407f-8798-80612018e89c"        }    ],    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-12T16:29:34.640295",        "uuid": {            "uuid_mslong": 14288579195414319591,            "uuid_lslong": 10333036915785587949        },        "user_visible": true,        "last_modified": "2018-01-12T16:29:34.708511",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "virtual_machine_refs": [        {            "to": [                "dev-client__c64878a1-f7b5-11e7-9dbb-98f2b3a33b90"            ],            "href": "http://127.0.0.1:8082/virtual-machine/c64878a1-f7b5-11e7-9dbb-98f2b3a33b90",            "attr": null,            "uuid": "c64878a1-f7b5-11e7-9dbb-98f2b3a33b90"        }    ],    "vlan_tag_based_bridge_domain": false,    "port_security_enabled": true,    "annotations": {        "key_value_pair": [            {                "key": "cluster",                "value": "k8s-default"            },            {                "key": "kind",                "value": "Pod"            },            {                "key": "namespace",                "value": "dev"            },            {                "key": "project",       
         "value": "kubernetes"            },            {                "key": "name",                "value": "dev-client"            },            {                "key": "owner",                "value": "k8s"            }        ]    },    "uuid": "c64b3b12-f7b5-11e7-8f66-52540065dced"}{    "fq_name": [        "dev-client__c65c2a12-f7b5-11e7-8f66-52540065dced"    ],    "uuid": "c65c2a12-f7b5-11e7-8f66-52540065dced",    "service_health_check_ip": false,    "instance_ip_address": "10.47.255.250",    "perms2": {        "owner": "cloud-admin",        "owner_access": 7,        "global_access": 0,        "share": []    },    "annotations": {        "key_value_pair": [            {                "key": "cluster",                "value": "k8s-default"            },            {                "key": "kind",                "value": "Pod"            },            {                "key": "namespace",                "value": "dev"            },            {                "key": "project",                "value": "kubernetes"            },            {                "key": "name",                "value": "dev-client"            },            {                "key": "owner",                "value": "k8s"            }        ]    },    "subnet_uuid": "4b421367-165a-4555-80ab-2cff90cb9401",    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-12T16:29:34.763793",        "uuid": {            "uuid_mslong": 14293345578320728551,            "uuid_lslong": 10333036915785587949        },        "user_visible": true,        "last_modified": "2018-01-12T16:29:34.810063",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "virtual_machine_interface_refs": [        {            "to": [                "default-domain",                "kubernetes",                "dev-client__c64b3b12-f7b5-11e7-8f66-52540065dced"            ],            "href": "http://127.0.0.1:8082/virtual-machine-interface/c64b3b12-f7b5-11e7-8f66-52540065dced",            "attr": null,            "uuid": "c64b3b12-f7b5-11e7-8f66-52540065dced"        }    ],    "service_instance_ip": false,    "instance_ip_local_ip": false,    "virtual_network_refs": [        {            "to": [                "default-domain",                "kubernetes",                "dev-vn"            ],            "href": "http://127.0.0.1:8082/virtual-network/ce01826b-e3e6-407f-8798-80612018e89c",            "attr": null,            "uuid": "ce01826b-e3e6-407f-8798-80612018e89c"        }    ],    "instance_ip_secondary": false,    "display_name": "dev__dev-client"}
    

    Appendix B: Service

    B.1 LB VMI

    {    "fq_name": [        "default-domain",        "kubernetes",        "svc-dev-web__20c27603-2d0f-45f5-9647-defe4adaba9a"    ],    "virtual_machine_interface_mac_addresses": {        "mac_address": [            "02:20:c2:76:03:2d"        ]    },    "display_name": "dev-share__svc-dev-web",    "security_group_refs": [        {            "to": [                "default-domain",                "kubernetes",                "k8s-default-dev-share-sg"            ],            "href": "http://127.0.0.1:8082/security-group/791f1c7e-a66e-4c47-ba05-409f00ee2c8e",            "attr": null,            "uuid": "791f1c7e-a66e-4c47-ba05-409f00ee2c8e"        },        {            "to": [                "default-domain",                "kubernetes",                "k8s-default-dev-share-default"            ],            "href": "http://127.0.0.1:8082/security-group/ad29de07-5ef6-4f55-86bb-52c44827c09d",            "attr": null,            "uuid": "ad29de07-5ef6-4f55-86bb-52c44827c09d"        }    ],    "routing_instance_refs": [        {            "to": [                "default-domain",                "kubernetes",                "cluster-network",                "cluster-network"            ],            "href": "http://127.0.0.1:8082/routing-instance/5ed7608a-28bb-4735-a8d8-2e9132b03d62",            "attr": {                "direction": "both",                "protocol": null,                "ipv6_service_chain_address": null,                "dst_mac": null,                "mpls_label": null,                "vlan_tag": null,                "src_mac": null,                "service_chain_address": null            },            "uuid": "5ed7608a-28bb-4735-a8d8-2e9132b03d62"        }    ],    "virtual_machine_interface_disable_policy": false,    "parent_type": "project",    "perms2": {        "owner": "None",        "owner_access": 7,        "global_access": 0,        "share": []    },    "virtual_network_refs": [        {            "to": [                "default-domain",                "kubernetes",                "cluster-network"            ],            "href": "http://127.0.0.1:8082/virtual-network/1b9f7f74-17f0-493a-9108-729f91b43598",            "attr": null,            "uuid": "1b9f7f74-17f0-493a-9108-729f91b43598"        }    ],    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-12T15:21:05.324801",        "uuid": {            "uuid_mslong": 2360578910708516341,            "uuid_lslong": 10828869012794555034        },        "user_visible": true,        "last_modified": "2018-01-12T15:21:05.365345",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "vlan_tag_based_bridge_domain": false,    "virtual_machine_interface_device_owner": "K8S:LOADBALANCER",    "port_security_enabled": true,    "uuid": "20c27603-2d0f-45f5-9647-defe4adaba9a"}
    

    B.2 LB IP instance and floating IP

    IP instance

    {    "fq_name": [        "svc-dev-web__ff9782ea-f79d-423e-af9e-cde45ef847f2"    ],    "uuid": "ff9782ea-f79d-423e-af9e-cde45ef847f2",    "service_health_check_ip": false,    "instance_ip_address": "10.167.87.84",    "perms2": {        "owner": "cloud-admin",        "owner_access": 7,        "global_access": 0,        "share": []    },    "virtual_network_refs": [        {            "to": [                "default-domain",                "kubernetes",                "cluster-network"            ],            "href": "http://127.0.0.1:8082/virtual-network/1b9f7f74-17f0-493a-9108-729f91b43598",            "attr": null,            "uuid": "1b9f7f74-17f0-493a-9108-729f91b43598"        }    ],    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-12T15:21:05.433006",        "uuid": {            "uuid_mslong": 18417333146843169342,            "uuid_lslong": 12654778383687239666        },        "user_visible": true,        "last_modified": "2018-01-12T15:21:05.433006",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "virtual_machine_interface_refs": [        {            "to": [                "default-domain",                "kubernetes",                "svc-dev-web__20c27603-2d0f-45f5-9647-defe4adaba9a"            ],            "href": "http://127.0.0.1:8082/virtual-machine-interface/20c27603-2d0f-45f5-9647-defe4adaba9a",            "attr": null,            "uuid": "20c27603-2d0f-45f5-9647-defe4adaba9a"        }    ],    "service_instance_ip": false,    "instance_ip_local_ip": false,    "instance_ip_secondary": false,    "display_name": "svc-dev-web"}
    

    Floating IP

    {    "project_refs": [        {            "to": [                "default-domain",                "kubernetes"            ],            "href": "http://127.0.0.1:8082/project/46c31b9b-d21c-4c27-9445-6c94db948b6d",            "attr": null,            "uuid": "46c31b9b-d21c-4c27-9445-6c94db948b6d"        }    ],    "fq_name": [        "svc-dev-web__ff9782ea-f79d-423e-af9e-cde45ef847f2",        "dee62bd0-ed5a-4ac5-b7d7-dc6f329cdba7"    ],    "uuid": "dee62bd0-ed5a-4ac5-b7d7-dc6f329cdba7",    "floating_ip_port_mappings": {        "port_mappings": [            {                "protocol": "TCP",                "src_port": 80,                "dst_port": 80            }        ]    },    "parent_type": "instance-ip",    "perms2": {        "owner": "cloud-admin",        "owner_access": 7,        "global_access": 0,        "share": []    },    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-12T15:21:05.562790",        "uuid": {            "uuid_mslong": 16061573297398762181,            "uuid_lslong": 13247299199082224551        },        "user_visible": true,        "last_modified": "2018-01-12T15:21:06.073466",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "floating_ip_address": "10.167.87.84",    "virtual_machine_interface_refs": [        {            "to": [                "default-domain",                "kubernetes",                "dev-web-669n0__59f3d2a8-f7ab-11e7-8f66-52540065dced"            ],            "href": "http://127.0.0.1:8082/virtual-machine-interface/59f3d2a8-f7ab-11e7-8f66-52540065dced",            "attr": null,            "uuid": "59f3d2a8-f7ab-11e7-8f66-52540065dced"        },        {            "to": [                "default-domain",                "kubernetes",                "dev-web-k528t__5a1fc03e-f7ab-11e7-8f66-52540065dced"            ],            "href": "http://127.0.0.1:8082/virtual-machine-interface/5a1fc03e-f7ab-11e7-8f66-52540065dced",            "attr": null,            "uuid": "5a1fc03e-f7ab-11e7-8f66-52540065dced"        }    ],    "floating_ip_port_mappings_enable": true,    "display_name": "dee62bd0-ed5a-4ac5-b7d7-dc6f329cdba7",    "floating_ip_traffic_direction": "ingress"}
    

    B.3 LB

    Loadbalancer

    {    "fq_name": [        "default-domain",        "kubernetes",        "svc-dev-web__34f826d8-f7ac-11e7-9dbb-98f2b3a33b90"    ],    "uuid": "34f826d8-f7ac-11e7-9dbb-98f2b3a33b90",    "service_appliance_set_refs": [        {            "to": [                "default-global-system-config",                "native"            ],            "href": "http://127.0.0.1:8082/service-appliance-set/d5cf94dd-6556-40fc-b3dd-0020dacf7cfc",            "attr": null,            "uuid": "d5cf94dd-6556-40fc-b3dd-0020dacf7cfc"        }    ],    "parent_type": "project",    "perms2": {        "owner": "None",        "owner_access": 7,        "global_access": 0,        "share": []    },    "loadbalancer_properties": {        "status": null,        "provisioning_status": "ACTIVE",        "admin_state": true,        "vip_address": "10.167.87.84",        "vip_subnet_id": null,        "operating_status": "ONLINE"    },    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-12T15:21:05.486093",        "uuid": {            "uuid_mslong": 3816843397506535911,            "uuid_lslong": 11365846252762905488        },        "user_visible": true,        "last_modified": "2018-01-12T15:21:05.514920",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "virtual_machine_interface_refs": [        {            "to": [                "default-domain",                "kubernetes",                "svc-dev-web__20c27603-2d0f-45f5-9647-defe4adaba9a"            ],            "href": "http://127.0.0.1:8082/virtual-machine-interface/20c27603-2d0f-45f5-9647-defe4adaba9a",            "attr": null,            "uuid": "20c27603-2d0f-45f5-9647-defe4adaba9a"        }    ],    "display_name": "dev-share__svc-dev-web",    "loadbalancer_provider": "native",    "annotations": {        "key_value_pair": [            {                "key": "cluster",                "value": "k8s-default"            },            {                "key": "kind",                "value": "Service"            },            {                "key": "namespace",                "value": "dev-share"            },            {                "key": "project",                "value": "kubernetes"            },            {                "key": "name",                "value": "svc-dev-web"            },            {                "key": "owner",                "value": "k8s"            }        ]    }}
    

    LB Listener

    {    "loadbalancer_listener_properties": {        "default_tls_container": null,        "protocol": "TCP",        "connection_limit": null,        "admin_state": true,        "sni_containers": [],        "protocol_port": 80    },    "fq_name": [        "default-domain",        "kubernetes",        "svc-dev-web__34f826d8-f7ac-11e7-9dbb-98f2b3a33b90-TCP-80-331d4fc1-7e80-47a7-a6a0-6cef54c37b6c"    ],    "uuid": "331d4fc1-7e80-47a7-a6a0-6cef54c37b6c",    "parent_type": "project",    "perms2": {        "owner": "None",        "owner_access": 7,        "global_access": 0,        "share": []    },    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-12T15:21:05.564006",        "uuid": {            "uuid_mslong": 3683187762728552359,            "uuid_lslong": 12006716381744823148        },        "user_visible": true,        "last_modified": "2018-01-12T15:21:05.564006",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "loadbalancer_refs": [        {            "to": [                "default-domain",                "kubernetes",                "svc-dev-web__34f826d8-f7ac-11e7-9dbb-98f2b3a33b90"            ],            "href": "http://127.0.0.1:8082/loadbalancer/34f826d8-f7ac-11e7-9dbb-98f2b3a33b90",            "attr": null,            "uuid": "34f826d8-f7ac-11e7-9dbb-98f2b3a33b90"        }    ],    "display_name": "svc-dev-web__34f826d8-f7ac-11e7-9dbb-98f2b3a33b90-TCP-80-331d4fc1-7e80-47a7-a6a0-6cef54c37b6c"}
    

    LB Pool

    {    "fq_name": [        "default-domain",        "kubernetes",        "svc-dev-web__34f826d8-f7ac-11e7-9dbb-98f2b3a33b90-TCP-80-331d4fc1-7e80-47a7-a6a0-6cef54c37b6c"    ],    "uuid": "3ed542dc-cbc5-4b47-aeb7-c35f8443a672",    "parent_type": "project",    "perms2": {        "owner": "None",        "owner_access": 7,        "global_access": 0,        "share": []    },    "loadbalancer_listener_refs": [        {            "to": [                "default-domain",                "kubernetes",                "svc-dev-web__34f826d8-f7ac-11e7-9dbb-98f2b3a33b90-TCP-80-331d4fc1-7e80-47a7-a6a0-6cef54c37b6c"            ],            "href": "http://127.0.0.1:8082/loadbalancer-listener/331d4fc1-7e80-47a7-a6a0-6cef54c37b6c",            "attr": null,            "uuid": "331d4fc1-7e80-47a7-a6a0-6cef54c37b6c"        }    ],    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-12T15:21:05.646375",        "uuid": {            "uuid_mslong": 4527598516469844807,            "uuid_lslong": 12589746098345846386        },        "user_visible": true,        "last_modified": "2018-01-12T15:21:05.646375",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "loadbalancer_pool_properties": {        "status": null,        "protocol": "TCP",        "subnet_id": null,        "session_persistence": null,        "admin_state": true,        "persistence_cookie_name": null,        "status_description": null,        "loadbalancer_method": null    },    "display_name": "svc-dev-web__34f826d8-f7ac-11e7-9dbb-98f2b3a33b90-TCP-80-331d4fc1-7e80-47a7-a6a0-6cef54c37b6c"}
    

    LB Member

    {    "fq_name": [        "default-domain",        "kubernetes",        "svc-dev-web__34f826d8-f7ac-11e7-9dbb-98f2b3a33b90-TCP-80-331d4fc1-7e80-47a7-a6a0-6cef54c37b6c",        "53d85c7f-6b13-482e-8706-92142bfa2543"    ],    "uuid": "53d85c7f-6b13-482e-8706-92142bfa2543",    "parent_type": "loadbalancer-pool",    "perms2": {        "owner": "None",        "owner_access": 7,        "global_access": 0,        "share": []    },    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-12T15:21:05.811773",        "uuid": {            "uuid_mslong": 6041680602444548142,            "uuid_lslong": 9729624660315350339        },        "user_visible": true,        "last_modified": "2018-01-12T15:21:05.830431",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "display_name": "53d85c7f-6b13-482e-8706-92142bfa2543",    "loadbalancer_member_properties": {        "status": null,        "status_description": null,        "weight": 1,        "admin_state": true,        "address": null,        "protocol_port": 80    },    "annotations": {        "key_value_pair": [            {                "key": "vm",                "value": "708154c6-f7ab-11e7-a9df-98f2b3a36be0"            },            {                "key": "vmi",                "value": "5a1fc03e-f7ab-11e7-8f66-52540065dced"            }        ]    }}
    

    B.4 External FIP

    {    "project_refs": [        {            "to": [                "default-domain",                "kubernetes"            ],            "href": "http://127.0.0.1:8082/project/46c31b9b-d21c-4c27-9445-6c94db948b6d",            "attr": null,            "uuid": "46c31b9b-d21c-4c27-9445-6c94db948b6d"        }    ],    "fq_name": [        "default-domain",        "kubernetes",        "BGP",        "BGP",        "svc-dev-web__1526aa69-f7bf-11e7-9dbb-98f2b3a33b90120.136.134.67-externalIP"    ],    "uuid": "ac091da2-28d7-467f-bd49-10edb2885219",    "floating_ip_port_mappings": {        "port_mappings": [            {                "protocol": "TCP",                "src_port": 80,                "dst_port": 80            }        ]    },    "parent_type": "floating-ip-pool",    "perms2": {        "owner": "None",        "owner_access": 7,        "global_access": 0,        "share": []    },    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-12T17:36:13.280888",        "uuid": {            "uuid_mslong": 12396472031621105279,            "uuid_lslong": 13639451559556829721        },        "user_visible": true,        "last_modified": "2018-01-12T17:36:13.424379",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "floating_ip_address": "120.136.134.67",    "virtual_machine_interface_refs": [        {            "to": [                "default-domain",                "kubernetes",                "dev-web-669n0__59f3d2a8-f7ab-11e7-8f66-52540065dced"            ],            "href": "http://127.0.0.1:8082/virtual-machine-interface/59f3d2a8-f7ab-11e7-8f66-52540065dced",            "attr": null,            "uuid": "59f3d2a8-f7ab-11e7-8f66-52540065dced"        },        {            "to": [                "default-domain",                "kubernetes",                "svc-dev-web__78f5adca-cbfe-422a-810c-bb3be9c15589"            ],            "href": "http://127.0.0.1:8082/virtual-machine-interface/78f5adca-cbfe-422a-810c-bb3be9c15589",            "attr": null,            "uuid": "78f5adca-cbfe-422a-810c-bb3be9c15589"        },        {            "to": [                "default-domain",                "kubernetes",                "dev-web-k528t__5a1fc03e-f7ab-11e7-8f66-52540065dced"            ],            "href": "http://127.0.0.1:8082/virtual-machine-interface/5a1fc03e-f7ab-11e7-8f66-52540065dced",            "attr": null,            "uuid": "5a1fc03e-f7ab-11e7-8f66-52540065dced"        }    ],    "floating_ip_port_mappings_enable": true,    "display_name": "svc-dev-web__1526aa69-f7bf-11e7-9dbb-98f2b3a33b90120.136.134.67-externalIP",    "floating_ip_traffic_direction": "ingress"}
    

    Recommended reading

    Tungsten Fabric Solution Guide: Gateway MX

    The "Tungsten Fabric + K8s Integration Guide" series:

    Part 1: Deployment preparation and initial state
    Part 2: Creating virtual networks
    Part 3: Creating security policies
    Part 4: Creating isolated namespaces

    The "Getting Started with Tungsten Fabric + K8s" series:
    Part 1: TF Carbide evaluation guide: preparation
    Part 2: Basic application connectivity through Kubernetes services
    Part 3: Advanced external application connectivity through Kubernetes Ingress
    Part 4: Initial application isolation using Kubernetes namespaces
    Part 5: Application micro-segmentation with Kubernetes network policies

    posted in Blog
  • Tungsten Fabric Solution Guide: Kubernetes Integration (Part 1)

    6a039d77-919e-4629-a316-c6645a8a8b8e-image.png
    Author: Tony Liu  Translator: TF Compilation Team

    1 Kubernetes Integration with TF

    In this integration there are two connection points between Kubernetes and Tungsten Fabric (editor's note: the original text says Contrail; its open-source version has been renamed Tungsten Fabric, and every occurrence of Contrail has been replaced with Tungsten Fabric).

    • contrail-kube-manager and kube-api-server
    • Contrail CNI

    1.1 contrail-kube-manager

    This service connects to kube-api-server to receive updates. It then connects to the Tungsten Fabric config API server to create the necessary configuration (VM, VMI/port, IP, etc.) to attach containers to the overlay. It also sends updates back to kube-api-server.
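
    As a quick illustration (not part of the product itself), the objects contrail-kube-manager creates can be read back from the config API with plain REST. The endpoint below mirrors the hrefs shown in the appendices; the UUID is taken from Appendix A.4, and the response is assumed to use the usual envelope keyed by resource type.

    import requests

    API = "http://127.0.0.1:8082"  # Tungsten Fabric config API, as in the appendix hrefs

    def get_vmi(uuid):
        """Fetch a virtual-machine-interface object created for a pod."""
        resp = requests.get(f"{API}/virtual-machine-interface/{uuid}")
        resp.raise_for_status()
        # Assumption: the body is wrapped as {"virtual-machine-interface": {...}}.
        return resp.json()["virtual-machine-interface"]

    vmi = get_vmi("c64b3b12-f7b5-11e7-8f66-52540065dced")  # UUID from Appendix A.4
    print(vmi["display_name"], vmi["virtual_network_refs"][0]["to"])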

    1.2 Contrail CNI

    The kubelet on each node/minion runs with CNI parameters. When a container is started, the kubelet invokes the CNI plugin to set up networking. The Contrail CNI plugin connects to the vRouter agent REST API to:

    1) Retrieve the necessary configuration
    2) Plug the container's network interface into the vRouter

    1.3 Gateway

    Tungsten Fabric uses a gateway to connect the overlay and underlay networks and provide external access. A gateway is required to support Kubernetes exposed services and ingress.

    A floating IP pool must be created in Tungsten Fabric.

    For provisioning, configure the FQ name of this FIP pool in /etc/contrailctl/kubemanager.conf.

    [KUBERNETES_VNC]
    public_fip_pool = {'domain': 'default-domain', 'project': 'default', 'network': 'public', 'name': 'public-fip-pool'}
    

    Inside the contrail-kube-manager container, the floating IP pool is configured in /etc/contrail/contrail-kubernetes.conf.

    [VNC]
    public_fip_pool = {'domain': 'default-domain', 'project': 'default', 'network': 'public', 'name': 'public-fip-pool'}
    

    When a service is exposed or an ingress is created in Kubernetes, a FIP is allocated from this pool as the external IP.

    2 Namespace

    When integrated with Tungsten Fabric, a Kubernetes namespace can be mapped either to a project/tenant or to a virtual network.

    2.1 Single-tenant

    If [KUBERNETES].cluster_project is set in /etc/contrail/contrail-kubernetes.conf, the deployment is single-tenant and Kubernetes namespaces map to virtual networks in Tungsten Fabric. All non-isolated namespaces map to the default virtual network "cluster-network", while each isolated namespace maps to its own virtual network "<namespace>-vn".

    Here is an example of setting [KUBERNETES].cluster_project in /etc/contrailctl/kubemanager.conf to enable single-tenant mode.

    [KUBERNETES]
    cluster_project = {'domain': 'default-domain', 'project': 'kubernetes'}
    

    The following objects are created by contrail-kube-manager during initialization.

    • Flat IPAM: pod-ipam, with a subnet

    • IPAM: service-ipam

    • Security groups k8s-default-default-default and k8s-default-default-sg for the virtual network "cluster-network"

    • Virtual network: cluster-network, with pod-ipam and service-ipam

    (See Appendix A.1)

    2.1.1 Non-isolated namespaces

    Create a non-isolated namespace.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: "dev-unisolated"
    

    When Kubernetes creates a non-isolated namespace, Tungsten Fabric creates two SGs, k8s-default-<namespace>-sg and k8s-default-<namespace>-default. No virtual network is created here; pods in all non-isolated namespaces live on the cluster-network.

    Launch a Pod in the non-isolated namespace.

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-1
    spec:
      containers:
      - name: nginx
        image: docker.io/nginx
        imagePullPolicy: IfNotPresent

    kubectl create -f nginx-1.yaml -n <namespace>
    kubectl get pods -n <namespace>
    

    When a Pod is launched in a non-isolated namespace, Tungsten Fabric (contrail-kube-manager) does the following.
    53e81c03-065f-4667-86a8-0cb70a7c7b79-image.png

    Pods in different non-isolated namespaces can reach each other, because they are on the same virtual network in Tungsten Fabric.

    2.1.2 Isolated namespaces

    Create an isolated namespace.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: "dev-isolated"
      annotations: { "opencontrail.org/isolation": "true" }
    

    When an isolated namespace is created in Kubernetes, Tungsten Fabric creates the following.

    96164191-af5c-47b7-853d-f9f115474381-image.png

    Pods in different isolated namespaces cannot reach each other, because their ports are on different virtual networks.

    2.2 Multi-tenant

    If [KUBERNETES].cluster_project is not set in /etc/contrail/contrail-kubernetes.conf, the deployment is multi-tenant and each Kubernetes namespace maps to a tenant/project in Tungsten Fabric. Pods in non-isolated namespaces are launched on the default virtual network "cluster-network", while each isolated namespace maps to its own virtual network "<namespace>-vn".

    At initialization, contrail-kube-manager creates the following:

    • A project/tenant for each existing Kubernetes namespace (e.g. default, kube-public and kube-system)
    • Flat IPAM default-domain:default:pod-ipam
    • IPAM default-domain:default:service-ipam
    • Security groups for each namespace: k8s-default-<namespace>-sg and k8s-default-<namespace>-default
    • Virtual network default-domain:default:cluster-network, with pod-ipam and service-ipam

    2.2.1 Non-isolated namespaces

    Create a non-isolated namespace.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: "dev-unisolated"
    

    Contrail-kube-manager creates the following.

    89734530-ac26-4fdf-86c5-48bb9f7d13dd-image.png

    Pods in different non-isolated namespaces can reach each other, because they are on the same virtual network in Tungsten Fabric.

    2.2.2 Isolated namespaces

    Create an isolated namespace.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: "dev-isolated"
      annotations: { "opencontrail.org/isolation": "true" }
    

    Contrail-kube-manager creates the following.

    4c0c907c-bf5f-4748-8ce7-29f896030db2-image.png

    Pods in different isolated namespaces cannot reach each other, because their ports are on different virtual networks.

    2.3 Custom namespaces

    Create a custom namespace.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: "dev-customized"
      annotations: { "opencontrail.org/network": '{"domain": "default-domain", "project": "demo", "name": "red"}' }
    

    When a Pod is launched in a custom namespace, contrail-kube-manager creates the port:

    • In the project default-domain:default
    • On the virtual network mapped to the custom namespace
    • With an address from the IPAM associated with that virtual network
    • Security group?

    2.4 Pod on a specified virtual network

    Launch a Pod on a specified virtual network (a hedged example follows the list below).

    When a Pod is launched on a specified virtual network, contrail-kube-manager creates the port:

    • In the project mapped to the specified or default namespace
    • On the specified virtual network
    • With an address from the IPAM associated with that virtual network
    • Security group?
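
    The original does not show a manifest for this case. As an assumption, the sketch below launches a Pod carrying the same "opencontrail.org/network" annotation used for custom namespaces in section 2.3, applied at the Pod level, via the Kubernetes Python client; the Pod name and target network are illustrative.

    from kubernetes import client, config

    config.load_kube_config()

    # Assumption: the pod-level annotation mirrors the namespace-level
    # "opencontrail.org/network" annotation shown in section 2.3.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(
            name="nginx-on-red",
            annotations={
                "opencontrail.org/network":
                    '{"domain": "default-domain", "project": "demo", "name": "red"}'
            },
        ),
        spec=client.V1PodSpec(
            containers=[client.V1Container(name="nginx", image="docker.io/nginx")]
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)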

    2.5 Kubernetes network policy

    Kubernetes network policies work as usual; they are implemented by security groups in Tungsten Fabric. This support ships with release 4.0.1.
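
    For reference, here is a minimal sketch of the kind of NetworkPolicy Tungsten Fabric would translate into security groups, created with the Kubernetes Python client; the policy name, namespace and labels are illustrative, not taken from the original.

    from kubernetes import client, config

    config.load_kube_config()

    # Illustrative policy: only pods labeled app=web-client may reach
    # app=web-qa pods on TCP/80 in the "dev" namespace.
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-web-client"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "web-qa"}),
            policy_types=["Ingress"],
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(
                        match_labels={"app": "web-client"}))],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=80)],
            )],
        ),
    )

    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="dev", body=policy)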

    2.6 POD SNAT

    Tungsten Fabric supports this feature: a router (a configuration object) can be configured in Tungsten Fabric to act as the external gateway for the virtual network where the containers are launched. This works the same way as the external gateway support for OpenStack.

    3 Service

    Kubernetes services support the ClusterIP, NodePort, LoadBalancer and ExternalName types, and an IP can also be specified with ExternalIP. Tungsten Fabric supports ClusterIP and LoadBalancer, as well as ExternalIP.

    When a service is created in Kubernetes, a loadbalancer is created in Tungsten Fabric. The loadbalancer provider is "native", and ECMP load balancing is implemented by the vRouter. A floating IP is created as the VIP.

    Create an application with multiple replicas.

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web-qa
    spec:
      replicas: 2
      selector:
        app: web-qa
      template:
        metadata:
          name: web-qa
          labels:
            app: web-qa
        spec:
          containers:
          - name: web
            image: docker.io/nginx
            imagePullPolicy: IfNotPresent
    

    3.1 ClusterIP

    Create a service in front of these application instances. The default service type is ClusterIP.

    kind: Service
    apiVersion: v1
    metadata:
      name: web-qa
    spec:
      selector:
        app: web-qa
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
    

    When the service is created, contrail-kube-manager does the following.
    76b82306-3ce3-4658-93b9-d1bb34c1d608-image.png

    When the LB is created, the "native" LB driver does the following:

    • Sets the port mapping on the FIP.
    • Adds the VMIs of all members to the FIP.

    With the ClusterIP type, the service is reachable only from inside the cluster. A FIP is allocated from the service FIP pool on the cluster-network and mapped to all the Pod addresses. When the service address is accessed from within the cluster, the vRouter balances the traffic across the Pods.

    3.2 Loadbalancer

    Create a service of type LoadBalancer.

    kind: Service
    apiVersion: v1
    metadata:
      name: web-qa
    spec:
      selector:
        app: web-qa
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      type: LoadBalancer
    

    With the LoadBalancer type, the service is exposed externally. One FIP is allocated from the service FIP pool for access inside the cluster, and another FIP is allocated from the public FIP pool and mapped to all the Pod addresses. That FIP is advertised to the gateway, which performs ECMP load balancing across the Pods.

    Appendix A: Single-tenant

    A.1 IPAM

    pod-ipam

    {    "fq_name": [        "default-domain",        "kubernetes",        "pod-ipam"    ],    "uuid": "c9641741-c785-456e-845b-a14a253c3572",    "ipam_subnet_method": "flat-subnet",    "parent_type": "project",    "perms2": {        "owner": "None",        "owner_access": 7,        "global_access": 0,        "share": []    },    "ipam_subnets": {        "subnets": [            {                "subnet": {                    "ip_prefix": "10.32.0.0",                    "ip_prefix_len": 12                },                "dns_server_address": "10.47.255.253",                "enable_dhcp": true,                "created": null,                "default_gateway": "10.47.255.254",                "dns_nameservers": [],                "dhcp_option_list": null,                "subnet_uuid": null,                "alloc_unit": 1,                "last_modified": null,                "host_routes": null,                "addr_from_start": null,                "subnet_name": null,                "allocation_pools": []            }        ]    },    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2017-12-27T18:45:33.957901",        "uuid": {            "uuid_mslong": 14511749470582293870,            "uuid_lslong": 9537393975711511922        },        "user_visible": true,        "last_modified": "2017-12-27T18:45:33.957901",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "display_name": "pod-ipam"}
    

    service-ipam

    {    "fq_name": [        "default-domain",        "kubernetes",        "service-ipam"    ],    "uuid": "526f554a-0bf4-47c6-a8e4-768a3f98cef4",    "parent_type": "project",    "perms2": {        "owner": "None",        "owner_access": 7,        "global_access": 0,        "share": []    },    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2017-12-27T18:45:34.000690",        "uuid": {            "uuid_mslong": 5940060210041472966,            "uuid_lslong": 12169982429206466292        },        "user_visible": true,        "last_modified": "2017-12-27T18:45:34.000690",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "display_name": "service-ipam"}
    

    A.2 Security groups

    k8s-default-dev-share-default

    {    "fq_name": [        "default-domain",        "kubernetes",        "k8s-default-dev-share-default"    ],    "uuid": "ad29de07-5ef6-4f55-86bb-52c44827c09d",    "parent_type": "project",    "perms2": {        "owner": "46c31b9b-d21c-4c27-9445-6c94db948b6d",        "owner_access": 7,        "global_access": 0,        "share": []    },    "security_group_id": 8000010,    "id_perms": {        "enable": true,        "description": "Default security group",        "creator": null,        "created": "2018-01-12T09:02:15.110429",        "uuid": {            "uuid_mslong": 12477748365846007637,            "uuid_lslong": 9708444424704868509        },        "user_visible": true,        "last_modified": "2018-01-12T15:45:08.899388",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "security_group_entries": {        "policy_rule": [            {                "direction": ">",                "protocol": "any",                "dst_addresses": [                    {                        "security_group": "local",                        "subnet": null,                        "virtual_network": null,                        "subnet_list": [],                        "network_policy": null                    }                ],                "action_list": null,                "created": null,                "rule_uuid": "dc13bb48-e2a7-4c59-a0b8-740ecfcb9a2c",                "dst_ports": [                    {                        "end_port": 65535,                        "start_port": 0                    }                ],                "application": [],                "last_modified": null,                "ethertype": "IPv4",                "src_addresses": [                    {                        "security_group": null,                        "subnet": {                            "ip_prefix": "0.0.0.0",                            "ip_prefix_len": 0                        },                        "virtual_network": null,                        "subnet_list": [],                        "network_policy": null                    }                ],                "rule_sequence": null,                "src_ports": [                    {                        "end_port": 65535,                        "start_port": 0                    }                ]            },            {                "direction": ">",                "protocol": "any",                "dst_addresses": [                    {                        "security_group": "local",                        "subnet": null,                        "virtual_network": null,                        "subnet_list": [],                        "network_policy": null                    }                ],                "action_list": null,                "created": null,                "rule_uuid": "a84e2d98-2b8f-45ba-aa75-88494da73b11",                "dst_ports": [                    {                        "end_port": 65535,                        "start_port": 0                    }                ],                "application": [],                "last_modified": null,                "ethertype": "IPv6",                "src_addresses": [                    {                        "security_group": null,                        "subnet": {                            "ip_prefix": "::",                            "ip_prefix_len": 0                        },                  
      "virtual_network": null,                        "subnet_list": [],                        "network_policy": null                    }                ],                "rule_sequence": null,                "src_ports": [                    {                        "end_port": 65535,                        "start_port": 0                    }                ]            },            {                "direction": ">",                "protocol": "any",                "dst_addresses": [                    {                        "security_group": null,                        "subnet": {                            "ip_prefix": "0.0.0.0",                            "ip_prefix_len": 0                        },                        "virtual_network": null,                        "subnet_list": [],                        "network_policy": null                    }                ],                "action_list": null,                "created": null,                "rule_uuid": "b7752ec1-6037-4c7f-97a9-291893fbed64",                "dst_ports": [                    {                        "end_port": 65535,                        "start_port": 0                    }                ],                "application": [],                "last_modified": null,                "ethertype": "IPv4",                "src_addresses": [                    {                        "security_group": "local",                        "subnet": null,                        "virtual_network": null,                        "subnet_list": [],                        "network_policy": null                    }                ],                "rule_sequence": null,                "src_ports": [                    {                        "end_port": 65535,                        "start_port": 0                    }                ]            },            {                "direction": ">",                "protocol": "any",                "dst_addresses": [                    {                        "security_group": null,                        "subnet": {                            "ip_prefix": "::",                            "ip_prefix_len": 0                        },                        "virtual_network": null,                        "subnet_list": [],                        "network_policy": null                    }                ],                "action_list": null,                "created": null,                "rule_uuid": "ea5cd2a8-2d47-47c4-a9ab-390de2317246",                "dst_ports": [                    {                        "end_port": 65535,                        "start_port": 0                    }                ],                "application": [],                "last_modified": null,                "ethertype": "IPv6",                "src_addresses": [                    {                        "security_group": "local",                        "subnet": null,                        "virtual_network": null,                        "subnet_list": [],                        "network_policy": null                    }                ],                "rule_sequence": null,                "src_ports": [                    {                        "end_port": 65535,                        "start_port": 0                    }                ]            }        ]    },    "annotations": {        "key_value_pair": [            {                "key": "namespace",                "value": "dev-share"            },            {                "key": "cluster",                "value": 
"k8s-default"            },            {                "key": "kind",                "value": "Namespace"            },            {                "key": "project",                "value": "kubernetes"            },            {                "key": "name",                "value": "k8s-default-dev-share-default"            },            {                "key": "owner",                "value": "k8s"            }        ]    },    "display_name": "k8s-default-dev-share-default"}
    

    k8s-default-dev-share-sg

    {    "fq_name": [        "default-domain",        "kubernetes",        "k8s-default-dev-share-sg"    ],    "uuid": "791f1c7e-a66e-4c47-ba05-409f00ee2c8e",    "parent_type": "project",    "perms2": {        "owner": "46c31b9b-d21c-4c27-9445-6c94db948b6d",        "owner_access": 7,        "global_access": 0,        "share": []    },    "security_group_id": 8000017,    "id_perms": {        "enable": true,        "description": "Namespace security group",        "creator": null,        "created": "2018-01-12T09:02:15.236401",        "uuid": {            "uuid_mslong": 8727725933151013959,            "uuid_lslong": 13404190917597736078        },        "user_visible": true,        "last_modified": "2018-01-12T09:02:15.275407",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "display_name": "k8s-default-dev-share-sg",    "annotations": {        "key_value_pair": [            {                "key": "namespace",                "value": "dev-share"            },            {                "key": "cluster",                "value": "k8s-default"            },            {                "key": "kind",                "value": "Namespace"            },            {                "key": "project",                "value": "kubernetes"            },            {                "key": "name",                "value": "k8s-default-dev-share-sg"            },            {                "key": "owner",                "value": "k8s"            }        ]    }}
    

    A.3 Virtual networks

    cluster-network

    {    "virtual_network_properties": {        "forwarding_mode": "l3",        "allow_transit": null,        "network_id": null,        "mirror_destination": false,        "vxlan_network_identifier": null,        "rpf": null    },    "fq_name": [        "default-domain",        "kubernetes",        "cluster-network"    ],    "uuid": "1b9f7f74-17f0-493a-9108-729f91b43598",    "address_allocation_mode": "user-defined-subnet-only",    "mac_aging_time": 300,    "parent_type": "project",    "perms2": {        "owner": "None",        "owner_access": 7,        "global_access": 0,        "share": []    },    "display_name": "cluster-network",    "pbb_evpn_enable": false,    "mac_learning_enabled": false,    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2017-12-27T18:45:34.062865",        "uuid": {            "uuid_mslong": 1990449696915605818,            "uuid_lslong": 10450728964983109016        },        "user_visible": true,        "last_modified": "2017-12-29T10:29:20.685414",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "flood_unknown_unicast": false,    "layer2_control_word": false,    "port_security_enabled": true,    "network_ipam_refs": [        {            "to": [                "default-domain",                "kubernetes",                "service-ipam"            ],            "href": "http://127.0.0.1:8082/network-ipam/526f554a-0bf4-47c6-a8e4-768a3f98cef4",            "attr": {                "ipam_subnets": [                    {                        "subnet": {                            "ip_prefix": "10.167.0.0",                            "ip_prefix_len": 16                        },                        "dns_server_address": "10.167.255.253",                        "enable_dhcp": true,                        "created": null,                        "default_gateway": "10.167.255.254",                        "dns_nameservers": [],                        "dhcp_option_list": null,                        "subnet_uuid": "10a8de65-9de8-419b-b14c-180bf2ab3dc9",                        "alloc_unit": 1,                        "last_modified": null,                        "host_routes": null,                        "addr_from_start": null,                        "subnet_name": null,                        "allocation_pools": []                    }                ],                "host_routes": null            },            "uuid": "526f554a-0bf4-47c6-a8e4-768a3f98cef4"        },        {            "to": [                "default-domain",                "kubernetes",                "pod-ipam"            ],            "href": "http://127.0.0.1:8082/network-ipam/c9641741-c785-456e-845b-a14a253c3572",            "attr": {                "ipam_subnets": [                    {                        "subnet": null,                        "dns_server_address": null,                        "enable_dhcp": true,                        "created": null,                        "default_gateway": null,                        "dns_nameservers": [],                        "dhcp_option_list": null,                        "subnet_uuid": "d2b090ce-cbcc-4b00-b50a-cc1ed5468b00",                        "alloc_unit": 1,                        "last_modified": null,                        "host_routes": null,                        "addr_from_start": null,                        
"subnet_name": null,                        "allocation_pools": []                    }                ],                "host_routes": null            },            "uuid": "c9641741-c785-456e-845b-a14a253c3572"        }    ],    "pbb_etree_enable": false,    "virtual_network_network_id": 5}
    

    dev-vn

    {    "virtual_network_properties": {        "forwarding_mode": "l3",        "allow_transit": null,        "network_id": null,        "mirror_destination": false,        "vxlan_network_identifier": null,        "rpf": null    },    "fq_name": [        "default-domain",        "kubernetes",        "dev-vn"    ],    "uuid": "ce01826b-e3e6-407f-8798-80612018e89c",    "address_allocation_mode": "flat-subnet-only",    "mac_aging_time": 300,    "parent_type": "project",    "perms2": {        "owner": "None",        "owner_access": 7,        "global_access": 0,        "share": []    },    "display_name": "dev-vn",    "pbb_evpn_enable": false,    "mac_learning_enabled": false,    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-09T11:40:06.196335",        "uuid": {            "uuid_mslong": 14844289246686494847,            "uuid_lslong": 9770700546218977436        },        "user_visible": true,        "last_modified": "2018-01-09T12:18:55.796399",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "flood_unknown_unicast": false,    "layer2_control_word": false,    "port_security_enabled": true,    "network_ipam_refs": [        {            "to": [                "default-domain",                "kubernetes",                "pod-ipam"            ],            "href": "http://127.0.0.1:8082/network-ipam/c9641741-c785-456e-845b-a14a253c3572",            "attr": {                "ipam_subnets": [                    {                        "subnet": null,                        "dns_server_address": null,                        "enable_dhcp": true,                        "created": null,                        "default_gateway": null,                        "dns_nameservers": [],                        "dhcp_option_list": null,                        "subnet_uuid": "48ed8235-efcd-44a1-998c-659e4f5840f4",                        "alloc_unit": 1,                        "last_modified": null,                        "host_routes": null,                        "addr_from_start": null,                        "subnet_name": null,                        "allocation_pools": []                    }                ],                "host_routes": null            },            "uuid": "c9641741-c785-456e-845b-a14a253c3572"        }    ],    "annotations": {        "key_value_pair": [            {                "key": "cluster",                "value": "k8s-default"            },            {                "key": "kind",                "value": "Namespace"            },            {                "key": "namespace",                "value": "dev"            },            {                "key": "isolated",                "value": "True"            },            {                "key": "project",                "value": "kubernetes"            },            {                "key": "name",                "value": "dev"            },            {                "key": "owner",                "value": "k8s"            }        ]    },    "pbb_etree_enable": false,    "virtual_network_network_id": 11}
    

    A.4虚拟机接口和实例IP

    非隔离端口

    {    "fq_name": [        "default-domain",        "kubernetes",        "dev-web-k528t__5a1fc03e-f7ab-11e7-8f66-52540065dced"    ],    "virtual_machine_interface_mac_addresses": {        "mac_address": [            "02:5a:1f:c0:3e:f7"        ]    },    "display_name": "dev-share__dev-web-k528t",    "security_group_refs": [        {            "to": [                "default-domain",                "kubernetes",                "k8s-default-dev-share-default"            ],            "href": "http://127.0.0.1:8082/security-group/ad29de07-5ef6-4f55-86bb-52c44827c09d",            "attr": null,            "uuid": "ad29de07-5ef6-4f55-86bb-52c44827c09d"        },        {            "to": [                "default-domain",                "kubernetes",                "k8s-default-dev-share-sg"            ],            "href": "http://127.0.0.1:8082/security-group/791f1c7e-a66e-4c47-ba05-409f00ee2c8e",            "attr": null,            "uuid": "791f1c7e-a66e-4c47-ba05-409f00ee2c8e"        }    ],    "routing_instance_refs": [        {            "to": [                "default-domain",                "kubernetes",                "cluster-network",                "cluster-network"            ],            "href": "http://127.0.0.1:8082/routing-instance/5ed7608a-28bb-4735-a8d8-2e9132b03d62",            "attr": {                "direction": "both",                "protocol": null,                "ipv6_service_chain_address": null,                "dst_mac": null,                "mpls_label": null,                "vlan_tag": null,                "src_mac": null,                "service_chain_address": null            },            "uuid": "5ed7608a-28bb-4735-a8d8-2e9132b03d62"        }    ],    "virtual_machine_interface_disable_policy": false,    "parent_type": "project",    "perms2": {        "owner": "None",        "owner_access": 7,        "global_access": 0,        "share": []    },    "virtual_network_refs": [        {            "to": [                "default-domain",                "kubernetes",                "cluster-network"            ],            "href": "http://127.0.0.1:8082/virtual-network/1b9f7f74-17f0-493a-9108-729f91b43598",            "attr": null,            "uuid": "1b9f7f74-17f0-493a-9108-729f91b43598"        }    ],    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-12T15:14:58.189964",        "uuid": {            "uuid_mslong": 6494120564367233511,            "uuid_lslong": 10333036915785587949        },        "user_visible": true,        "last_modified": "2018-01-12T15:14:58.253769",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "virtual_machine_refs": [        {            "to": [                "dev-web-k528t__708154c6-f7ab-11e7-a9df-98f2b3a36be0"            ],            "href": "http://127.0.0.1:8082/virtual-machine/708154c6-f7ab-11e7-a9df-98f2b3a36be0",            "attr": null,            "uuid": "708154c6-f7ab-11e7-a9df-98f2b3a36be0"        }    ],    "vlan_tag_based_bridge_domain": false,    "port_security_enabled": true,    "annotations": {        "key_value_pair": [            {                "key": "cluster",                "value": "k8s-default"            },            {                "key": "kind",                "value": "Pod"            },            {                "key": "namespace",                "value": "dev-share"        
    },            {                "key": "project",                "value": "kubernetes"            },            {                "key": "name",                "value": "dev-web-k528t"            },            {                "key": "owner",                "value": "k8s"            }        ]    },    "uuid": "5a1fc03e-f7ab-11e7-8f66-52540065dced"}
    

    IP实例

    {    "fq_name": [        "dev-web-k528t__5a2f9cde-f7ab-11e7-8f66-52540065dced"    ],    "uuid": "5a2f9cde-f7ab-11e7-8f66-52540065dced",    "service_health_check_ip": false,    "instance_ip_address": "10.47.255.251",    "perms2": {        "owner": "cloud-admin",        "owner_access": 7,        "global_access": 0,        "share": []    },    "annotations": {        "key_value_pair": [            {                "key": "cluster",                "value": "k8s-default"            },            {                "key": "kind",                "value": "Pod"            },            {                "key": "namespace",                "value": "dev-share"            },            {                "key": "project",                "value": "kubernetes"            },            {                "key": "name",                "value": "dev-web-k528t"            },            {                "key": "owner",                "value": "k8s"            }        ]    },    "subnet_uuid": "d2b090ce-cbcc-4b00-b50a-cc1ed5468b00",    "id_perms": {        "enable": true,        "description": null,        "creator": null,        "created": "2018-01-12T15:14:58.323069",        "uuid": {            "uuid_mslong": 6498585268770771431,            "uuid_lslong": 10333036915785587949        },        "user_visible": true,        "last_modified": "2018-01-12T15:14:58.363792",        "permissions": {            "owner": "cloud-admin",            "owner_access": 7,            "other_access": 7,            "group": "cloud-admin-group",            "group_access": 7        }    },    "virtual_machine_interface_refs": [        {            "to": [                "default-domain",                "kubernetes",                "dev-web-k528t__5a1fc03e-f7ab-11e7-8f66-52540065dced"            ],            "href": "http://127.0.0.1:8082/virtual-machine-interface/5a1fc03e-f7ab-11e7-8f66-52540065dced",            "attr": null,            "uuid": "5a1fc03e-f7ab-11e7-8f66-52540065dced"        }    ],    "service_instance_ip": false,    "instance_ip_local_ip": false,    "virtual_network_refs": [        {            "to": [                "default-domain",                "kubernetes",                "cluster-network"            ],            "href": "http://127.0.0.1:8082/virtual-network/1b9f7f74-17f0-493a-9108-729f91b43598",            "attr": null,            "uuid": "1b9f7f74-17f0-493a-9108-729f91b43598"        }    ],    "instance_ip_secondary": false,    "display_name": "dev-share__dev-web-k528t"}
    

    Tungsten Fabric解决方案指南-Kubernetes集成(下)


    推荐阅读

    Tungsten Fabric解决方案指南-Gateway MX

    posted in 博客
  • TF实战丨使用Vagrant安装Tungsten Fabric

    b213f6e5-7a50-4161-b366-a2915e1c13e5-image.png

    本文为苏宁网络架构师陈刚的原创文章。

    01准备测试机

    在16G内存的笔记本上没跑起来,就干脆拼凑了一台游戏工作室级别的机器:双路E5-2680 v3 CPU,24核48线程,128G DDR4 ECC内存,512G NVMe盘。在上面开5个VM,假装是物理服务器。

    · 192.16.35.110 deployer

    · 192.16.35.111 tf控制器

    · 192.16.35.112 openstack服务器,同时也是计算节点

    · 192.16.35.113 k8s master

    · 192.16.35.114 k8s的Node k01,同时也是ops的计算节点

    直接使用vagrant拉镜像会很慢,就先下载下来:

    https://cloud.centos.org/centos/7/vagrant/x86_64/images/

    下载对应的VirtualBox.box文件。

    然后使用如下命令,将其添加为名为centos/7的vagrant box:

    vagrant box add centos/7 CentOS-7-x86_64-Vagrant-2004_01.VirtualBox.box

    cat << EEOOFF > vagrantfile
    ### start 
    # -*- mode: ruby -*-
    # vi: set ft=ruby :
    Vagrant.require_version ">=2.0.3"
    
    # All Vagrant configuration is done below. The "2" in Vagrant.configure
    # configures the configuration version (we support older styles for
    # backwards compatibility). Please don't change it unless you know what
    # you're doing.
    
    ENV["LC_ALL"] = "en_US.UTF-8"
    
    VAGRANTFILE_API_VERSION = "2"
    
    Vagrant.configure("2") do |config|
      # The most common configuration options are documented and commented below.
      # For a complete reference, please see the online documentation at
      # https://docs.vagrantup.com.
    
      # Every Vagrant development environment requires a box. You can search for
      # boxes at https://atlas.hashicorp.com/search.
    
      config.vm.box = "geerlingguy/centos7"
      # config.vbguest.auto_update = false
      # config.vbguest.no_remote = true  
    
      config.vm.define "deployer" do | dp |
        dp.vm.provider "virtualbox" do | v |
          v.memory = "8000"
          v.cpus = 2
        end
        dp.vm.network "private_network", ip: "192.16.35.110", auto_config: true
        dp.vm.hostname = "deployer"
      end
    
      config.vm.define "tf" do | tf |
        tf.vm.provider "virtualbox" do | v |
          v.memory = "64000"
          v.cpus = 16
        end
        tf.vm.network "private_network", ip: "192.16.35.111", auto_config: true
        tf.vm.hostname = "tf"
      end
    
      config.vm.define "ops" do | os |
        os.vm.provider "virtualbox" do | v |
          v.memory = "16000"
          v.cpus = 4
        end
        os.vm.network "private_network",ip: "192.16.35.112",  auto_config: true
        os.vm.hostname = "ops"
      end
    
      config.vm.define "k8s" do | k8 |
        k8.vm.provider "virtualbox" do | v |
          v.memory = "8000"
          v.cpus = 2
        end
        k8.vm.network "private_network", ip: "192.16.35.113", auto_config: true
        k8.vm.hostname = "k8s"
      end
    
      config.vm.define "k01" do | k1 |
        k1.vm.provider "virtualbox" do | v |
          v.memory = "4000"
          v.cpus = 2
        end
        k1.vm.network "private_network", ip: "192.16.35.114", auto_config: true
        k1.vm.hostname = "k01"
      end
    
      config.vm.provision "shell", privileged: true, path: "./setup.sh"
    
    end
    
    
    EEOOFF
    
    cat << EEOOFF > setup.sh
    #!/bin/bash
    #
    # Setup vagrant vms.
    #
    
    set -eu
    
    # Copy hosts info
    cat << EOF > /etc/hosts
    127.0.0.1 localhost
    127.0.1.1 vagrant.vm vagrant
    
    192.16.35.110 deployer
    192.16.35.111 tf
    192.16.35.112 ops
    192.16.35.113 k8s
    192.16.35.114 k01
    
    
    # The following lines are desirable for IPv6 capable hosts
    ::1     localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters
    EOF
    
    systemctl stop firewalld
    systemctl disable firewalld
    iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
    iptables -P FORWARD ACCEPT
    
    swapoff -a 
    sed -i 's/.*swap.*/#&/' /etc/fstab
    # swapoff -a && sysctl -w vm.swappiness=0
    
    # setenforce  0 
    sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux 
    sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 
    sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux 
    sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config  
    
    # modprobe ip_vs_rr
    modprobe br_netfilter
    
    yum -y update
    
    # sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
    # sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
    # yum install -y bridge-utils.x86_64
    # modprobe bridge
    # modprobe br_netfilter
    # Setup system vars
    
    yum install -y epel-release
    yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools vim chrony python python-setuptools python-pip iproute lrzsz tree git
    
    yum install -y libguestfs-tools libvirt-python virt-install libvirt ansible
    
    pip install wheel --upgrade -i https://mirrors.aliyun.com/pypi/simple/
    pip install pip --upgrade -i https://mirrors.aliyun.com/pypi/simple/
    pip install ansible  netaddr --upgrade -i https://mirrors.aliyun.com/pypi/simple/
    
    # python-urllib3 should be installed before "pip install requests"
    # if install failed, pip uninstall urllib3, then reinstall python-urllib3
    # pip uninstall -y urllib3 | true
    # yum install -y python-urllib3 
    pip install requests -i https://mirrors.aliyun.com/pypi/simple/
    
    systemctl disable libvirtd.service
    systemctl disable dnsmasq
    systemctl stop libvirtd.service
    systemctl stop dnsmasq
    
    if [  -d "/root/.ssh" ]; then
          rm -rf /root/.ssh
    fi
    
    ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
    
    cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
    chmod go-rwx ~/.ssh/authorized_keys
    
    # 
    # timedatectl set-timezone Asia/Shanghai
    
    if [ -f "/etc/chrony.conf" ]; then
       mv /etc/chrony.conf /etc/chrony.conf.bak
    fi
    
    cat << EOF > /etc/chrony.conf
          allow 192.16.35.0/24
          server ntp1.aliyun.com iburst
          local stratum 10
          logdir /var/log/chrony
          rtcsync
          makestep 1.0 3
          driftfile /var/lib/chrony/drift
    EOF
    
    systemctl restart chronyd.service
    systemctl enable chronyd.service
    
    echo "* soft nofile 65536" >> /etc/security/limits.conf
    echo "* hard nofile 65536" >> /etc/security/limits.conf
    echo "* soft nproc 65536"  >> /etc/security/limits.conf
    echo "* hard nproc 65536"  >> /etc/security/limits.conf
    echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
    echo "* hard memlock  unlimited"  >> /etc/security/limits.conf
    
    if [ ! -d "/var/log/journal" ]; then
      mkdir /var/log/journal
    fi
    
    if [ ! -d "/etc/systemd/journald.conf.d" ]; then
      mkdir /etc/systemd/journald.conf.d
    fi
    
    cat << EOF > /etc/systemd/journald.conf.d/99-prophet.conf
    [Journal]
    Storage=persistent
    
    Compress=yes
    
    SyncIntervalSec=5m
    RateLimitInterval=30s
    RateLimitBurst=1000
    
    SystemMaxUse=10G
    
    SystemMaxFileSize=200M
    
    ForwardToSyslog=no
    
    EOF
    
    systemctl restart systemd-journald
    
    
    EEOOFF
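
    Vagrantfile和setup.sh准备好之后,就可以用标准的vagrant命令把这5台虚拟机拉起来(以下命令仅为示意,假设在vagrantfile所在目录执行,首次启动时会自动运行setup.sh做初始化):

    # 启动全部虚拟机,首次启动会执行provision脚本setup.sh
    vagrant up
    # 查看各虚拟机的运行状态
    vagrant status
    # 登录某台虚拟机确认环境,例如tf控制器
    vagrant ssh tf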
    

    02在所有的节点上安装docker

    CentOS

    如果pip安装软件的速度很慢,可以考虑使用基于aliyun镜像源的pip加速。

    · 各个节点设置pip加速

    mkdir -p ~/.pip && tee ~/.pip/pip.conf << EOF
    [global]
    index-url = https://mirrors.aliyun.com/pypi/simple/
    EOF

    接下来编写部署用的instances.yaml:

    cat << EOF > instances.yaml
    provider_config:
      bms:
        ssh_pwd: vagrant
        ssh_user: root
        ntpserver: ntp1.aliyun.com
        domainsuffix: local
    instances:
      tf:
        provider: bms
        ip: 192.16.35.111
        roles:
          config_database:
          config:
          control:
          analytics_database:
          analytics:
          webui:
      ops:
        provider: bms
        ip: 192.16.35.112
        roles:
          openstack:
          openstack_compute:
          vrouter:
            PHYSICAL_INTERFACE: enp0s8
      k8s:
        provider: bms
        ip: 192.16.35.113
        roles:
          k8s_master:
          k8s_node:
          kubemanager:
          vrouter:
            PHYSICAL_INTERFACE: enp0s8
      k01:
        provider: bms
        ip: 192.16.35.114
        roles:
          openstack_compute:
          k8s_node:
          vrouter:
            PHYSICAL_INTERFACE: enp0s8
    contrail_configuration:
      AUTH_MODE: keystone
      KEYSTONE_AUTH_URL_VERSION: /v3
      KEYSTONE_AUTH_ADMIN_PASSWORD: vagrant
      CLOUD_ORCHESTRATOR: openstack
      CONTRAIL_VERSION: latest
      UPGRADE_KERNEL: true
      ENCAP_PRIORITY: "VXLAN,MPLSoUDP,MPLSoGRE"
      PHYSICAL_INTERFACE: enp0s8
    global_configuration:
      CONTAINER_REGISTRY: opencontrailnightly
    kolla_config:
      kolla_globals:
        enable_haproxy: no
        enable_ironic: "no"
        enable_swift: "no"
        network_interface: "enp0s8"
      kolla_passwords:
        keystone_admin_password: vagrant
    
    EOF
    
    export INSTANCES_FILE=instances.yaml
    docker cp $INSTANCES_FILE contrail_kolla_ansible_deployer:/root/contrail-ansible-deployer/config/instances.yaml
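
    docker cp之后,可以顺手确认文件确实已经拷贝进deployer容器(示意命令,路径沿用上文):

    docker exec contrail_kolla_ansible_deployer ls -l /root/contrail-ansible-deployer/config/instances.yaml
    docker exec contrail_kolla_ansible_deployer head -n 20 /root/contrail-ansible-deployer/config/instances.yaml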
    

    05准备好所有节点的环境

    除了deployer,我在所有节点上都做了一遍。

    正常的做法是建个自己的repository放各种image,实验环境节点少,直接国内下载也很快的。

    注意docker和docker-py这两个pip包是冲突的,只能安装其中之一,最好先把两个都卸载,再安装需要的那个:

    pip uninstall docker-py docker
    pip install docker
    
    yum -y install python-devel python-subprocess32 python-setuptools python-pip
    
     pip install --upgrade pip
    
     find / -name *subpro*.egg-info
     find / -name *subpro*.egg-info |xargs rm -rf
    
    pip install -I six
    pip install -I docker-compose
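
    装完之后可以简单验证一下docker SDK和docker-compose是否可用(示意):

    python -c "import docker; print(docker.__version__)"
    docker-compose version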
    

    将k8s repository改成阿里云的镜像源,缺省的Google源太慢或不通:

    vi playbooks/roles/k8s/tasks/RedHat.yml

    yum_repository:
    name: Kubernetes
    description: k8s repo
    baseurl: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    gpgkey: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    repo_gpgcheck: yes
    gpgcheck: yes
    when: k8s_package_version is defined
    

    playbook中安装这些需要访问海外网站,可以从国内下载,然后改个tag:

    k8s.gcr.io/kube-apiserver:v1.14.8
    k8s.gcr.io/kube-controller-manager:v1.14.8
    k8s.gcr.io/kube-scheduler:v1.14.8
    k8s.gcr.io/kube-proxy:v1.14.8
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1
    

    换个方法变通处理一下

    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.8
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.8
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.8
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.8
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
    docker pull coredns/coredns:1.3.1
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.8.3
    

    再重新给下载的打个tag

    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.8 k8s.gcr.io/kube-apiserver:v1.14.8
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.8 k8s.gcr.io/kube-controller-manager:v1.14.8
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.8 k8s.gcr.io/kube-scheduler:v1.14.8
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.8 k8s.gcr.io/kube-proxy:v1.14.8
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
    docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.8.3  k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
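
    如果不想逐条敲命令,也可以用一个小脚本批量pull并重打tag(仅为示意,镜像列表与上文一致,需要支持关联数组的bash 4及以上版本):

    #!/bin/bash
    # 批量从国内镜像源拉取k8s相关镜像,并重打为k8s.gcr.io的tag
    set -e
    ALIYUN=registry.cn-hangzhou.aliyuncs.com/google_containers
    declare -A IMAGES=(
      ["$ALIYUN/kube-apiserver:v1.14.8"]="k8s.gcr.io/kube-apiserver:v1.14.8"
      ["$ALIYUN/kube-controller-manager:v1.14.8"]="k8s.gcr.io/kube-controller-manager:v1.14.8"
      ["$ALIYUN/kube-scheduler:v1.14.8"]="k8s.gcr.io/kube-scheduler:v1.14.8"
      ["$ALIYUN/kube-proxy:v1.14.8"]="k8s.gcr.io/kube-proxy:v1.14.8"
      ["$ALIYUN/pause:3.1"]="k8s.gcr.io/pause:3.1"
      ["$ALIYUN/etcd:3.3.10"]="k8s.gcr.io/etcd:3.3.10"
      ["docker.io/coredns/coredns:1.3.1"]="k8s.gcr.io/coredns:1.3.1"
      ["$ALIYUN/kubernetes-dashboard-amd64:v1.8.3"]="k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3"
    )
    for src in "${!IMAGES[@]}"; do
      docker pull "$src"
      docker tag "$src" "${IMAGES[$src]}"
    done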
    

    06启动deployer容器,进入其中进行部署

    docker start contrail_kolla_ansible_deployer
    

    进入deployer容器:

    docker exec -it contrail_kolla_ansible_deployer bash
    cd /root/contrail-ansible-deployer
    ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/provision_instances.yml
    ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/configure_instances.yml
    ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_openstack.yml
    ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_k8s.yml
    ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_contrail.yml
    
    
    
    kubectl taint nodes k8s node-role.kubernetes.io/master-
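
    playbook全部跑完之后,可以先粗略确认一下集群状态(示意命令,具体输出以实际部署为准):

    # 在k8s master上确认节点已经Ready
    kubectl get nodes -o wide
    # 在TF控制器和各计算节点上确认Tungsten Fabric各组件的状态
    contrail-status
    # 确认相关容器都已拉起
    docker ps | grep -i contrail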
    

    最后,kubelet升级到最新版本时遇到了CSI的bug,修改一下配置文件再重启kubelet即可:

    After experiencing the same issue, editing /var/lib/kubelet/config.yaml to add:
    featureGates:
      CSIMigration: false
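
    按上面的方式改完/var/lib/kubelet/config.yaml之后,重启kubelet并确认状态即可(示意):

    systemctl restart kubelet
    systemctl status kubelet --no-pager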
    

    07安装完成后,建2个VM和容器测试一下

    yum install -y gcc python-devel
    pip install python-openstackclient
    pip install python-ironicclient
    
    source /etc/kolla/kolla-toolbox/admin-openrc.sh
    

    如果openstack命令出现如下关于“queue”的报错,说明需要python3:

    File "/usr/lib/python2.7/site-packages/openstack/utils.py", line 13, in 
        import queue
    ImportError: No module named queue
    
    rm -f /usr/bin/python
    ln -s /usr/bin/python3 /usr/bin/python
    pip install python-openstackclient
    pip install python-ironicclient
    yum install -y python3-pip
    
    yum install -y gcc python-devel wget
    pip install --upgrade setuptools
    pip install --ignore-installed python-openstackclient
    
    我每次都需要python3,所以干脆也安装了这个:
    pip3 install python-openstackclient -i https://mirrors.aliyun.com/pypi/simple/
    pip3 install python-ironicclient -i https://mirrors.aliyun.com/pypi/simple/
    

    进入Tungsten Fabric,用浏览器:https://192.16.35.111:8143

    进入openstack,用浏览器:https://192.16.35.112

    在k8s master上(192.16.35.113):

    scp root@192.16.35.114:/opt/cni/bin/contrail-k8s-cni /opt/cni/bin/
    mkdir /etc/cni/net.d
    scp root@192.16.35.114:/etc/cni/net.d/10-contrail.conf /etc/cni/net.d/10-contrail.conf
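
    拷贝完成后,可以在k8s master上确认CNI相关文件已经就位、节点状态恢复正常(示意):

    ls -l /opt/cni/bin/contrail-k8s-cni /etc/cni/net.d/10-contrail.conf
    kubectl get nodes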
    

    wget https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img

    官方下载地址:https://download.cirros-cloud.net/

    curl -O https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

    wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

    wget http://download.cirros-cloud.net/daily/20161201/cirros-d161201-x86_64-disk.img

    (都没有找到带tcpdump的版本)

    reboot

    source /etc/kolla/kolla-toolbox/admin-openrc.sh

    openstack image create cirros --disk-format qcow2 --public --container-format bare --file cirros-0.4.0-x86_64-disk.img
    nova flavor-create m1.tiny auto 512 1 1
    openstack network create net1
    openstack subnet create --subnet-range 10.1.1.0/24 --network net1 mysubnet1
    NET_ID=`openstack network list | grep net1 | awk -F '|' '{print $2}' | tr -d ' '`
     
    nova boot --image cirros --flavor m1.tiny --nic net-id=${NET_ID} VM1
    nova boot --image cirros --flavor m1.tiny --nic net-id=${NET_ID} VM2
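
    创建完成后,可以确认两台虚拟机的状态和分配到的IP(示意):

    openstack server list
    # 或者使用nova命令查看
    nova list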
    

    进入k8s_master, 192.16.35.113:

    yum install -y git
    git clone https://github.com/virtualhops/k8s-demo
    kubectl create -f k8s-demo/po-ubuntuapp.yml
    kubectl create -f k8s-demo/rc-frontend.yml
    kubectl expose rc/frontend
    kubectl exec -it ubuntuapp curl frontend # many times
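
    也可以顺便确认一下pod和service的情况(示意):

    kubectl get pods -o wide
    kubectl get svc frontend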
    

    参考方案:
    https://github.com/Juniper/contrail-ansible-deployer/wiki/[-Container-Workflow]-Deploying-Contrail-with-OpenStack

    推荐阅读

    Tungsten Fabric实战:对接vMX虚拟路由平台填坑
    Tungsten Fabric实战:基于K8s的部署踩坑
    TF实战Q&A丨你不理解透,出了问题都不知道怎么弄
    TF 实战Q&A丨只在此网中,云深不知处

    posted in 博客
  • Tungsten Fabric入门宝典丨多编排器用法及配置(下)

    OpenStack+OpenStack

    我还没有尝试将两个OpenStack集群添加到一个Tungsten Fabric controller中,但如果它们不使用相同的租户名称,那么应该是可行的。

    K8s+vCenter

    Kubernetes和vCenter的组合可以同时使用。用例类似于Kubernetes + OpenStack。

    OpenStack+vCenter

    OpenStack和vCenter的组合有点奇怪,因为OpenStack仪表盘可能用作vCenter网络的管理UI。

    根据我的尝试,vcenter-plugin会检查所有可用租户下的所有virtual-network,而不仅仅是“vCenter”租户下的virtual-network。因此,创建的virtual-network或其它Neutron组件,也可以被ESXi上的vRouterVM使用。通过这样的设置,vCenter用户可以像使用EC2/VPC一样,自己实现网络功能。

    vCenter+vCenter

    多vCenter是一个重要话题,因为vCenter具有明确定义的最大配置,而多vCenter安装是解决这些问题的常用方法。

    在这种情况下,最简单的设置是在每个vCenter上配置多个Tungsten Fabric集群,但此时很难在两个集群之间进行vMotion,因为Tungsten Fabric在vMotion完成后会创建一个新的端口,并且可能会分配不同的固定IP。

    因此,我认为将多个vCenter分配给一个Tungsten Fabric集群,将会有比较合理的用例。

    根据我的尝试,在当前实现中,由于vcenter-plugin仅对某些对象使用“vCenter”租户,因此,如果不进行代码修改,就不可能同时使用两个vcenter-plugin。

    如果可以按vcenter-plugin和vcenter-manager修改租户,则可以为每个vCenter分配一个单独的租户,然后同时使用它们,就像同时使用Kubernetes和OpenStack一样。

    如果这是可行的,还可以在多vCenter的环境中使用service-insertion和物理交换机扩展。

    甚至SRM集成也可能采用这种方式,因为占位符VM将分配一个新的端口,可以对其进行编辑以分配正确的固定IP。

    K8s+OpenStack+vCenter

    我不知道是否会使用这样的配置,因为Kubernetes / OpenStack / vCenter具有一些功能重叠,尽管如果设置正确的话会工作良好。


    Tungsten Fabric入门宝典系列文章——
    1.首次启动和运行指南
    2.TF组件的七种“武器”
    3.编排器集成
    4.关于安装的那些事(上)
    5.关于安装的那些事(下)
    6.主流监控系统工具的集成
    7.开始第二天的工作
    8.8个典型故障及排查Tips
    9.关于集群更新的那些事
    10.说说L3VPN及EVPN集成
    11.关于服务链、BGPaaS及其它
    12.关于多集群和多数据中心

    Tungsten Fabric 架构解析系列文章——
    第一篇:TF主要特点和用例
    第二篇:TF怎么运作
    第三篇:详解vRouter体系结构
    第四篇:TF的服务链
    第五篇:vRouter的部署选项
    第六篇:TF如何收集、分析、部署?
    第七篇:TF如何编排
    第八篇:TF支持API一览
    第九篇:TF如何连接到物理网络?
    第十篇:TF基于应用程序的安全策略

    posted in 博客
  • Tungsten Fabric入门宝典丨多编排器用法及配置(上)
    Tungsten Fabric入门宝典系列文章,来自技术大牛倾囊相授的实践经验,由TF中文社区为您编译呈现,旨在帮助新手深入理解TF的运行、安装、集成、调试等全流程。如果您有相关经验或疑问,欢迎与我们互动,并与社区极客们进一步交流。更多TF技术文章,请点击公号底部按钮>学习>文章合集。
    
    作者:Tatsuya Naganawa  译者:TF编译组
    

    在多个编排器之间共享控制平面有很多好处,包括routing/bridging、DNS、security等。

    下面我来描述每种情况的使用方法和配置。

    K8s+OpenStack

    Kubernetes + OpenStack的组合已经涵盖并且运行良好。

    另外,Tungsten Fabric支持嵌套安装(nested installation)和非嵌套安装(non-nested installation),因此你可以选择其中一个选项。

    K8s+K8s

    将多个Kubernetes集群添加到一个Tungsten Fabric中,是一种可能的安装选项。

    由于kube-manager支持cluster_name参数,该参数修改了将要创建的租户名称(默认为“k8s”),因此这应该是可行的。不过,我在上次尝试该方法时效果不佳,因为有些对象被其它kube-manager作为陈旧对象(stale object)删除了。

    在将来的版本中,可能会更改此行为。

    注意:

    从R2002及更高版本开始,这个修补程序解决了该问题,并且不再需要自定义修补程序。

    注意:应用这些补丁,似乎可以将多个kube-master添加到一个Tungsten Fabric集群中。

    diff --git a/src/container/kube-manager/kube_manager/kube_manager.py b/src/container/kube-manager/kube_manager/kube_manager.py
    index 0f6f7a0..adb20a6 100644
    --- a/src/container/kube-manager/kube_manager/kube_manager.py
    +++ b/src/container/kube-manager/kube_manager/kube_manager.py
    @@ -219,10 +219,10 @@ def main(args_str=None, kube_api_skip=False, event_queue=None,
    
         if args.cluster_id:
             client_pfx = args.cluster_id + '-'
    -        zk_path_pfx = args.cluster_id + '/'
    +        zk_path_pfx = args.cluster_id + '/' + args.cluster_name
         else:
             client_pfx = ''
    -        zk_path_pfx = ''
    +        zk_path_pfx = '' + args.cluster_name
    
         # randomize collector list
         args.random_collectors = args.collectors
    diff --git a/src/container/kube-manager/kube_manager/vnc/vnc_namespace.py b/src/container/kube-manager/kube_manager/vnc/vnc_namespace.py
    index 00cce81..f968cae 100644
    --- a/src/container/kube-manager/kube_manager/vnc/vnc_namespace.py
    +++ b/src/container/kube-manager/kube_manager/vnc/vnc_namespace.py
    @@ -594,7 +594,8 @@ class VncNamespace(VncCommon):
                     self._queue.put(event)
    
         def namespace_timer(self):
    -        self._sync_namespace_project()
    +        # self._sync_namespace_project() ## temporary disabled
    +        pass
    
         def _get_namespace_firewall_ingress_rule_name(self, ns_name):
             return "-".join([vnc_kube_config.cluster_name(),
    

    由于kube-master创建的pod-network都在同一个Tungsten Fabric controller上,因此在它们之间实现路由泄漏(route-leak)是可能的:)

    • 由于cluster_name将成为Tungsten Fabric的fw-policy中的标签之一,因此在多个Kubernetes集群之间也可以使用相同的标签
    172.31.9.29 Tungsten Fabric controller
    172.31.22.24 kube-master1 (KUBERNETES_CLUSTER_NAME=k8s1 is set)
    172.31.12.82 kube-node1 (it belongs to kube-master1)
    172.31.41.5 kube-master2(KUBERNETES_CLUSTER_NAME=k8s2 is set)
    172.31.4.1 kube-node2 (it belongs to kube-master2)
    
    
    [root@ip-172-31-22-24 ~]# kubectl get node
    NAME                                              STATUS     ROLES    AGE   VERSION
    ip-172-31-12-82.ap-northeast-1.compute.internal   Ready         57m   v1.12.3
    ip-172-31-22-24.ap-northeast-1.compute.internal   NotReady   master   58m   v1.12.3
    [root@ip-172-31-22-24 ~]# 
    
    [root@ip-172-31-41-5 ~]# kubectl get node
    NAME                                             STATUS     ROLES    AGE   VERSION
    ip-172-31-4-1.ap-northeast-1.compute.internal    Ready         40m   v1.12.3
    ip-172-31-41-5.ap-northeast-1.compute.internal   NotReady   master   40m   v1.12.3
    [root@ip-172-31-41-5 ~]# 
    
    [root@ip-172-31-22-24 ~]# kubectl get pod -o wide
    NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE                                              NOMINATED NODE
    cirros-deployment-75c98888b9-7pf82   1/1     Running   0          28m     10.47.255.249   ip-172-31-12-82.ap-northeast-1.compute.internal   
    cirros-deployment-75c98888b9-sgrc6   1/1     Running   0          28m     10.47.255.250   ip-172-31-12-82.ap-northeast-1.compute.internal   
    cirros-vn1                           1/1     Running   0          7m56s   10.0.1.3        ip-172-31-12-82.ap-northeast-1.compute.internal   
    [root@ip-172-31-22-24 ~]# 
    
    
    [root@ip-172-31-41-5 ~]# kubectl get pod -o wide
    NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE                                            NOMINATED NODE
    cirros-deployment-75c98888b9-5lqzc   1/1     Running   0          27m     10.47.255.250   ip-172-31-4-1.ap-northeast-1.compute.internal   
    cirros-deployment-75c98888b9-dg8bf   1/1     Running   0          27m     10.47.255.249   ip-172-31-4-1.ap-northeast-1.compute.internal   
    cirros-vn2                           1/1     Running   0          5m36s   10.0.2.3        ip-172-31-4-1.ap-northeast-1.compute.internal   
    [root@ip-172-31-41-5 ~]# 
    
    
    / # ping 10.0.2.3
    PING 10.0.2.3 (10.0.2.3): 56 data bytes
    64 bytes from 10.0.2.3: seq=83 ttl=63 time=1.333 ms
    64 bytes from 10.0.2.3: seq=84 ttl=63 time=0.327 ms
    64 bytes from 10.0.2.3: seq=85 ttl=63 time=0.319 ms
    64 bytes from 10.0.2.3: seq=86 ttl=63 time=0.325 ms
    ^C
    --- 10.0.2.3 ping statistics ---
    87 packets transmitted, 4 packets received, 95% packet loss
    round-trip min/avg/max = 0.319/0.576/1.333 ms
    / # 
    / # ip -o a
    1: lo:  mtu 65536 qdisc noqueue qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
    18: eth0@if19:  mtu 1500 qdisc noqueue \    link/ether 02:b9:11:c9:4c:b1 brd ff:ff:ff:ff:ff:ff
    18: eth0    inet 10.0.1.3/24 scope global eth0\       valid_lft forever preferred_lft forever
    / #
     -> ping between pods, which belong to different kubernetes clusters, worked well
    
    
    [root@ip-172-31-9-29 ~]# ./contrail-introspect-cli/ist.py ctr route show -t default-domain:k8s1-default:vn1:vn1.inet.0 
    
    default-domain:k8s1-default:vn1:vn1.inet.0: 2 destinations, 2 routes (1 primary, 1 secondary, 0 infeasible)
    
    10.0.1.3/32, age: 0:06:50.001343, last_modified: 2019-Jul-28 18:23:08.243656
        [XMPP (interface)|ip-172-31-12-82.local] age: 0:06:50.005553, localpref: 200, nh: 172.31.12.82, encap: ['gre', 'udp'], label: 50, AS path: None
    
    10.0.2.3/32, age: 0:02:25.188713, last_modified: 2019-Jul-28 18:27:33.056286
        [XMPP (interface)|ip-172-31-4-1.local] age: 0:02:25.193517, localpref: 200, nh: 172.31.4.1, encap: ['gre', 'udp'], label: 50, AS path: None
    [root@ip-172-31-9-29 ~]# 
    [root@ip-172-31-9-29 ~]# ./contrail-introspect-cli/ist.py ctr route show -t default-domain:k8s2-default:vn2:vn2.inet.0 
    
    default-domain:k8s2-default:vn2:vn2.inet.0: 2 destinations, 2 routes (1 primary, 1 secondary, 0 infeasible)
    
    10.0.1.3/32, age: 0:02:36.482764, last_modified: 2019-Jul-28 18:27:33.055702
        [XMPP (interface)|ip-172-31-12-82.local] age: 0:02:36.489419, localpref: 200, nh: 172.31.12.82, encap: ['gre', 'udp'], label: 50, AS path: None
    
    10.0.2.3/32, age: 0:04:37.126317, last_modified: 2019-Jul-28 18:25:32.412149
        [XMPP (interface)|ip-172-31-4-1.local] age: 0:04:37.133912, localpref: 200, nh: 172.31.4.1, encap: ['gre', 'udp'], label: 50, AS path: None
    [root@ip-172-31-9-29 ~]#
     -> each virtual-network in each kube-master has a route to other kube-master's pod, based on network-policy below
    
    
    
    (venv) [root@ip-172-31-9-29 ~]# contrail-api-cli --host 172.31.9.29 ls -l virtual-network
    virtual-network/f9d06d27-8fc1-413d-a6d6-c51c42191ac0  default-domain:k8s2-default:vn2
    virtual-network/384fb3ef-247b-42e6-a628-7111fe343f90  default-domain:k8s2-default:k8s2-default-service-network
    virtual-network/c3098210-983b-46bc-b750-d06acfc66414  default-domain:k8s1-default:k8s1-default-pod-network
    virtual-network/1ff6fdbd-ac2e-4601-b08c-5f7255466312  default-domain:default-project:ip-fabric
    virtual-network/d8d95738-0a00-457f-b21e-60304859d1f9  default-domain:k8s2-default:k8s2-default-pod-network
    virtual-network/0c075b76-4219-4f79-a4f5-1b4e6729f16e  default-domain:k8s1-default:k8s1-default-service-network
    virtual-network/985b3b5f-84b7-4810-a54d-abd09a37f525  default-domain:k8s1-default:vn1
    virtual-network/23782ea7-4000-491f-b20d-01c6ab9e2ba8  default-domain:default-project:default-virtual-network
    virtual-network/90cce352-ef9b-4358-81b3-ef87a9cb63e8  default-domain:default-project:__link_local__
    virtual-network/0292810c-c511-4147-89c0-9fdd571ccce8  default-domain:default-project:dci-network
    (venv) [root@ip-172-31-9-29 ~]# 
    
    (venv) [root@ip-172-31-9-29 ~]# contrail-api-cli --host 172.31.9.29 ls -l network-policy
    network-policy/134d38b2-79e2-4a3e-a2f7-a3d61ceaf5e2  default-domain:k8s1-default:vn1-to-vn2  
    posted in 博客
  • Tungsten Fabric解决方案指南-Gateway MX

    作者:Tony Liu 译者:TF编译组
    c8bf15ba-ac4a-4c1e-af11-470cb3b54414-image.png

    1 总览

    本指南介绍如何使用MX作为网关(gateway),为Tungsten Fabric(编者按:原文为Contrail,其开源版已更名为Tungsten Fabric,本文出现Contrail之处均以Tungsten Fabric替换)管理的overlay层提供external或underlay连接。

    根据性能要求,网关可以连接到主干(spine)或叶子(leaf)。

    2 Underlay/INET

    2.1 eBGP

    在典型的IP结构中,所有叶子(leaves)、主干(spines)和网关(gateways)都使用eBGP来建立underlay连接。

    2.2 iBGP

    对于iBGP,建议使用RR(路由反射器)以避免所有BGP节点之间的完全网状对等连接。

    3 Overlay/VPN

    3.1 环回地址

    在每个MX上都会分配并派发环回地址(loopback address)。它用于控制节点的BGP对等,以及vRouter的隧道(tunneling)。Tungsten Fabric和环回地址之间的连接由underlay提供。

    如果将单独的接口用于控制平面和数据平面,则当MX通告路由时,控制接口的地址将用作下一跳。要解决此问题,应将环回接口同时用于控制平面和数据平面。

    set interfaces lo0 unit 0 family inet address 10.6.0.31/32
    

    3.2 BGP

    3.2.1 AS

    通常,网关具有一个全局唯一ASN。

    set routing-options autonomous-system 64031
    

    3.2.2 eBGP and iBGP

    当Tungsten Fabric和网关位于不同的AS中时,将使用eBGP。

    set protocols bgp group vpn-contrail type external
    set protocols bgp group vpn-contrail multihop
    set protocols bgp group vpn-contrail local-address 10.6.0.31
    set protocols bgp group vpn-contrail keep all
    set protocols bgp group vpn-contrail family inet-vpn unicast
    set protocols bgp group vpn-contrail family evpn signaling
    set protocols bgp group vpn-contrail family route-target
    set protocols bgp group vpn-contrail neighbor 10.6.11.1 peer-as 64512
    

    当Tungsten Fabric和网关位于同一AS中时,将使用iBGP。

    set protocols bgp group vpn-contrail type internal
    set protocols bgp group vpn-contrail local-address 10.6.0.31
    set protocols bgp group vpn-contrail keep all
    set protocols bgp group vpn-contrail family inet-vpn unicast
    set protocols bgp group vpn-contrail family evpn signaling
    set protocols bgp group vpn-contrail family route-target
    set protocols bgp group vpn-contrail neighbor 10.6.11.1
    

    当网关全局ASN与Tungsten Fabric ASN不同时,可以使用local-as来启用iBGP。

    set protocols bgp group vpn-contrail type internal
    set protocols bgp group vpn-contrail local-address 10.6.0.31
    set protocols bgp group vpn-contrail local-as 64512
    set protocols bgp group vpn-contrail keep all
    set protocols bgp group vpn-contrail family inet-vpn unicast
    set protocols bgp group vpn-contrail family evpn signaling
    set protocols bgp group vpn-contrail family route-target
    set protocols bgp group vpn-contrail neighbor 10.6.11.1 peer-as 64512
    

    3.3 BGP Family

    3.3.1 L3VPN

    set protocols bgp group vpn-contrail family inet-vpn unicast
    

    3.3.2 EVPN

    set protocols bgp group vpn-contrail family evpn signaling
    

    3.3.3 Route Target

    set protocols bgp group vpn-contrail family route-target
    

    Family“route-target”是用于优化的。在MX上进行配置时,如果存在VRF导入策略,MX将会发布route-target路由。在将VPN-IPv4路由发布给邻居之前,MX还会检查route-target路由表。如果该路由中的route-target未被邻居通告,则MX不会通告该路由。

    如果控制平面和数据平面上的接口是分开的,则MX从Tungsten Fabric控制节点接收route-target路由。RT路由的下一跳是控制节点地址(在控制平面上)。MX会尝试解决数据平面上MPLS表(inet.3)中的下一跳,但是会失败。这样,RT路由不会生效,而会被隐藏。结果是MX没有发布路由。为了解决这个问题,可以在inet.3中添加静态路由,以使下一跳的控制接口可以被解析。然后,MX应用RT路由并发布路由。Tungsten Fabric没有此类问题,因为它不会尝试解析下一跳。

    3.4 隧道(Tunnel)

    Tunnel service是必须要启用的。这里有一个示例。

    set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
    

    3.4.1 MPLSoGRE隧道

    对于L3VPN,在BGP收到INET-VPN路由并将其放在表bgp.l3vpn.0中之后,它将为该路由寻找MPLS路径。BGP尝试解析表inet.3中的路由。如果成功,将创建GRE隧道并在inet.3中添加MPLS路由。否则,该路由将会被隐藏在bgp.l3vpn.0中。

    在启用隧道后,destination-networks的路由将被添加到inet.3中。这里是一个示例。

    set routing-options dynamic-tunnels contrail source-address 10.6.0.31
    set routing-options dynamic-tunnels contrail gre
    set routing-options dynamic-tunnels contrail destination-networks 10.6.11.0/24
    

    source-address为环回地址(loopback address)。

    这是表inet.3中GRE隧道路由的示例。

    10.6.11.4/32 (1 entry, 1 announced)
            *Tunnel Preference: 300
                    Next hop type: Router, Next hop index: 0
                    Address: 0xd7a9210
                    Next-hop reference count: 3
                    Next hop: via gr-0/0/0.32769, selected
                    Session Id: 0x0
                    State: 
                    Local AS: 64031 
                    Age: 10 
                    Validation State: unverified 
                    Task: DYN_TUNNEL
                    Announcement bits (2): 0-Resolve tree 1 1-Resolve_IGP_FRR task 
                    AS path: I
    

    这是动态隧道数据库。

    > show dynamic-tunnels database    
    *- Signal Tunnels #- PFE-down
    Table: inet.3       
    
    Destination-network: 10.6.11.0/24
    Tunnel to: 10.6.11.1/32 State: Up (expires in 00:06:58 seconds)
      Reference count: 0
      Next-hop type: gre
        Source address: 10.6.0.31
        Next hop: gr-0/0/10.32769
          State: Up
    Tunnel to: 10.6.11.7/32 State: Up
      Reference count: 2
      Next-hop type: gre
        Source address: 10.6.0.31
        Next hop: gr-0/0/10.32770
          State: Up
    

    3.4.2 MPLSoUDP Tunnel

    UDP隧道更适合于负载均衡。

    set routing-options dynamic-tunnels contrail source-address 10.6.0.31
    set routing-options dynamic-tunnels contrail udp
    set routing-options dynamic-tunnels contrail destination-networks 10.6.11.0/24
    

    这是表inet.3中UDP隧道路由的示例。

    10.6.11.4/32 (1 entry, 1 announced)
            *Tunnel Preference: 300
                    Next hop type: Tunnel Composite, Next hop index: 0
                    Address: 0xd7a87f0
                    Next-hop reference count: 2
                    Tunnel type: UDP, Reference count: 5, nhid: 0
                    Destination address: 10.6.11.4, Source address: 10.6.0.31
                    State: 
                    Local AS: 64031 
                    Age: 24:46 
                    Validation State: unverified 
                    Task: DYN_TUNNEL
                    Announcement bits (2): 0-Resolve tree 1 1-Resolve_IGP_FRR task 
                    AS path: I
    

    当路由从VRF导出到Tungsten Fabric时,需要添加策略(policy)来附加封装属性(encapsulation community)。

    set policy-options policy-statement vrf-export-provider-1 term t1 then community add provider-1
    set policy-options policy-statement vrf-export-provider-1 term t1 then community add encap-udp
    set policy-options policy-statement vrf-export-provider-1 term t1 then accept
    set policy-options community provider-1 members target:64512:101
    set policy-options community encap-udp members encapsulation:64512:13
    

    3.5 Routing Instance

    3.5.1 VRF

    RI的vrf类型用于保留L3路由。

    set routing-instances provider-1 instance-type vrf
    set routing-instances provider-1 interface lo0.11
    set routing-instances provider-1 route-distinguisher 64512:101
    set routing-instances provider-1 vrf-target target:64512:101
    set routing-instances provider-1 vrf-table-label
    

    3.5.2 虚拟交换机

    (略)

    4 路由导入/导出

    4.1 工作流

    4.1.1 导入(Import)

    • 首先,BGP与Tungsten Fabric建立对等关系。如果没有任何VRF RI和导入策略,则不会创建表bgp.l3vpn.0,并且BGP无法接收任何INET-VPN路由。

    • 在创建VRF RI后(必须配置vrf-table-label),可以使用隐式策略(implicit policy)或显式策略(explicit policy)。

    • 配置vrf-target将启用隐式策略,该策略将导入具有特定RT community的路由,并导出具有附加特定RT community的路由。
    • 配置“vrf-import”和“vrf-export”以指定显式策略,以备需要任何其它的操作。
    • 使用任何VRF RI和导入策略,将创建表bgp.l3vpn.0。

    • 根据导入策略,为每个RT创建一个RIB组vpn-unicast。

    vpn-unicast target:64512:101, Address: 0xd7a8e40
      Address Family: l3vpn, Flags: 0x4, References: 0
      Export RIB: l3vpn.0
      Import RIB: bgp.l3vpn.0
      Secondary Import RIB: provider-1.inet.0
    
    • BGP尝试解析表inet.3中的路由。如果成功,则分配GRE隧道。否则,该路由将被隐藏。

    • BGP接收到与导入策略匹配的INET-VPN路由(route-target community),并将其放在表bgp.l3vpn.0中。路由也转换为INET路由,并放置在VRF表中,该表是RIB组中的辅助导入RIB。否则,路由将被丢弃。

    这是表bgp.l3vpn.0中的INET-VPN路由示例。它是由BGP从Tungsten Fabric上通告的;路由标识符10.6.11.4:2由vRouter的IP地址和vRouter分配的ID组成;从Tungsten Fabric控制节点10.6.11.1发布;下一跳是通过动态GRE隧道接口gr-0/0/0.32769;MPLS标签为25。

    10.6.11.4:2:172.16.11.3/32                
                       *[BGP/170] 00:03:11, MED 100, localpref 100, from 10.6.11.1
                          AS path: 64512 ?, validation-state: unverified
                        > via gr-0/0/0.32769, Push 25
    

    该路由将转换为INET路由并放置在VRF中。

    172.16.11.3/32     *[BGP/170] 02:35:37, MED 100, localpref 100, from 10.6.11.1
                          AS path: 64512 ?, validation-state: unverified
                        > via gr-0/0/0.32769, Push 25
    

    4.1.2 导出(Export)

    • 要从VRF导出路由,根据导出策略,该路由将从INET转换为INET-VPN,放入表bgp.l3vpn.0中,然后由BGP导出。MPLS标签将分配给在表mpls.0中的INET-VPN路由。

    这是VRF中的环回接口,如表bgp.l3vpn.0所示。

    64512:101:172.16.11.250/32                
                       *[Direct/0] 00:43:14
                        > via lo0.11
    

    该路由用MPLS标签300624发布,通过 “show route advertising-protocol bgp 10.6.11.1 detail”可以显示细节。

    * 64512:101:172.16.11.250/32 (1 entry, 1 announced)
     BGP group vpn-contrail type External
         Route Distinguisher: 64512:101
         VPN Label: 300624
         Nexthop: Self
         Flags: Nexthop Change
         AS path: [64031] I
    

    MPLS标签在表mpls.0中分配。

    300624             *[VPN/170] 00:55:34
                          receive table provider-1.inet.0, Pop
    

    4.2 隐式VRF导入/导出策略

    使用vrf-target,可以创建隐式导入和导出策略。

    set routing-instances provider-1 instance-type vrf
    set routing-instances provider-1 vrf-table-label
    set routing-instances provider-1 vrf-target target:64512:101
    

    隐式导入策略将导入带有community“target:64540:100”的路由。其结果是,从Tungsten Fabric虚拟网络中发布的带有“target:64540:100”的路由,被导入到此RI中。

    > show policy __vrf-import-5b4s37-166-internal__ 
    Policy __vrf-import-5b4s37-166-internal__:
        Term unnamed:
            from community __vrf-community-5b4s37-166-common-internal__ [target:64540:100 ]
            then accept
        Term unnamed:
            then reject
    

    隐式导出策略将导出带有community“target:64540:100”的路由。其结果是,路由被发布到Tungsten Fabric,并导入到带有“target:64540:100”的虚拟网络中。

    > show policy __vrf-export-5b4s37-166-internal__ 
    Policy __vrf-export-5b4s37-166-internal__:
        Term unnamed:
            then community + __vrf-community-5b4s37-166-common-internal__ [target:64540:100 ] accept
    

    4.3 显式VRF导入/导出策略

    策略可被显式定义为导入和导出路由。在此示例中,从Tungsten Fabric虚拟网络中发布的带有“target:64540:91”和“target:64540:92”的路由被导入RI。RI中的路由使用“target:64540:91”和“target:64540:92”进行通告,并导入到两个虚拟网络中。

    set policy-options policy-statement provider-1-export term t1 then community add provider-1
    set policy-options policy-statement provider-1-export term t1 then accept
    set policy-options policy-statement provider-1-import term t1 from community provider-1
    set policy-options policy-statement provider-1-import term t1 from community ext-host
    set policy-options policy-statement provider-1-import term t1 then accept
    set policy-options community ext-host members target:64510:101
    set policy-options community provider-1 members target:64512:101
    set routing-instances provider-1 instance-type vrf
    set routing-instances provider-1 interface lo0.11
    set routing-instances provider-1 route-distinguisher 64512:101
    set routing-instances provider-1 vrf-table-label
    set routing-instances provider-1 vrf-import provider-1-import
    set routing-instances provider-1 vrf-export provider-1-export
    

    5 External/Underlay连接

    这里想说的是——

    • 在master RI中具有路由,以将ingress流量(从external/underlay到overlay)引导到VRF RI。

    • 在VRF RI中具有路由,以将egress流量(从overlay到external/underlay)引导到master RI。

    • 路由可以通过静态路由的方式进行泄漏。

    有两个工作选项:

    1. 逻辑隧道(Logical tunnel)

    2. RIB组和带有下一表(next-table)的静态路由

    详细信息请见以下各小节内容。

    5.1 逻辑隧道

    逻辑隧道用于连接master路由实例和VRF路由实例。根据使用情况,这是可选的。由于带宽限制,必须检查需求和特定硬件上的隧道带宽,以此来做出决定。

    5.1.1 静态

    这是在逻辑隧道上使用静态路由的示例。

    set chassis fpc 0 pic 0 tunnel-services
    set interfaces lt-0/0/0 unit 100 encapsulation frame-relay
    set interfaces lt-0/0/0 unit 100 dlci 10
    set interfaces lt-0/0/0 unit 100 peer-unit 200
    set interfaces lt-0/0/0 unit 100 family inet
    set interfaces lt-0/0/0 unit 200 encapsulation frame-relay
    set interfaces lt-0/0/0 unit 200 dlci 10
    set interfaces lt-0/0/0 unit 200 peer-unit 100
    set interfaces lt-0/0/0 unit 200 family inet
    set routing-options static route 172.16.11.0/24 next-hop lt-0/0/0.100
    set routing-instances provider-1 interface lt-0/0/0.200
    set routing-instances provider-1 routing-options static route 0.0.0.0/0 next-hop lt-0/0/0.200
    

    5.1.2 动态

    这里是一个示例,使用聚合路由在VRF和master之间配置BGP对等。

    set chassis fpc 0 pic 0 tunnel-services
    set interfaces lt-0/0/0 unit 100 encapsulation frame-relay
    set interfaces lt-0/0/0 unit 100 dlci 10
    set interfaces lt-0/0/0 unit 100 peer-unit 200
    set interfaces lt-0/0/0 unit 100 family inet address 192.168.200.0/31
    set interfaces lt-0/0/0 unit 200 encapsulation frame-relay
    set interfaces lt-0/0/0 unit 200 dlci 10
    set interfaces lt-0/0/0 unit 200 peer-unit 100
    set interfaces lt-0/0/0 unit 200 family inet address 192.168.200.1/31
    set protocols bgp group vrf type internal
    set protocols bgp group vrf local-address 192.168.200.0
    set protocols bgp group vrf keep all
    set protocols bgp group vrf family inet unicast
    set protocols bgp group vrf export provider-1-export
    set protocols bgp group vrf neighbor 192.168.200.1
    set policy-options policy-statement provider-1-export term t1 then community add provider-1
    set policy-options policy-statement provider-1-export term t1 then accept
    set policy-options policy-statement provider-1-aggregate-export term 1 from protocol aggregate
    set policy-options policy-statement provider-1-aggregate-export term 1 from route-filter 172.16.11.0/24 exact
    set policy-options policy-statement provider-1-aggregate-export term 1 then next-hop self
    set policy-options policy-statement provider-1-aggregate-export term 1 then accept
    set policy-options community provider-1 members target:64512:101
    set routing-instances provider-1 instance-type vrf
    set routing-instances provider-1 interface lt-0/0/0.200
    set routing-instances provider-1 route-distinguisher 64512:101
    set routing-instances provider-1 vrf-import provider-1-import
    set routing-instances provider-1 vrf-export provider-1-export
    set routing-instances provider-1 routing-options aggregate route 172.16.11.0/24
    set routing-instances provider-1 protocols bgp group master type internal
    set routing-instances provider-1 protocols bgp group master local-address 192.168.200.1
    set routing-instances provider-1 protocols bgp group master keep all
    set routing-instances provider-1 protocols bgp group master family inet unicast
    set routing-instances provider-1 protocols bgp group master export provider-1-aggregate-export
    set routing-instances provider-1 protocols bgp group master neighbor 192.168.200.0
    

    5.2 下一表(Next-table)

    可以将路由表指定为路由下一跳。从概念上讲,可以像下面的示例一样,在inet.0和vrf.inet.0之间控制流量。
    90aa13ec-7906-4de8-9333-d9f504574c44-image.png

    该解决方案的问题在于它将导致路由循环。例如,172.16.11.9的流量被导向vrf.inet.0。如果没有任何特定的路由解析,它将通过默认路由返回到inet.0。为了避免这种路由循环,Junos不允许进行这种配置。

    Junos也不允许配置第三张表(the third table)。

    5.3 RIB组

    RIB组通常用于泄漏路由表之间的路由。从概念上讲,可以创建一个RIB组以将INET路由从vrf.inet.0导入到inet.0,同时可以创建另一个RIB组以将INET路由从inet.0导入到vrf.inet.0。

    set routing-options rib-groups provider-1-master import-rib provider-1.inet.0
    set routing-options rib-groups provider-1-master import-rib inet.0
    set routing-options rib-groups master-provider-1 import-rib inet.0
    set routing-options rib-groups master-provider-1 import-rib provider-1.inet.0
    set protocols bgp group corp type external
    set protocols bgp group corp family inet unicast rib-group master-provider-1
    set protocols bgp group corp export direct
    set protocols bgp group corp neighbor 10.6.30.1 peer-as 64041
    set routing-instances provider-1 instance-type vrf
    set routing-instances provider-1 route-distinguisher 64512:101
    set routing-instances provider-1 vrf-import provider-1-import
    set routing-instances provider-1 vrf-export provider-1-export
    set routing-instances provider-1 vrf-table-label
    set routing-instances provider-1 routing-options auto-export family inet unicast rib-group provider-1-master
    

    此配置将路由从inet.0泄漏到vpn.inet.0。但反过来,从Tungsten Fabric接收而来的路由不会从vpn.inet.0泄漏到inet.0,这是Junos的设计使然:这些路由本身就是从bgp.l3vpn.0泄漏而来的,vpn.inet.0只是它们的辅助RIB,而辅助RIB中的路由不会被再次泄漏。

    5.4 RIB组和下一表(Next-table)

    5.4.1 Ingress

    对于ingress流量,由于Junos不会泄漏从VRF到master的overlay/32路由,因此有两个选择。

    1. 在VRF中添加生成(聚合)路由,并使用RIB组泄漏从vrf.inet.0到inet.0的聚合路由。
    set routing-options rib-groups provider-1-master import-rib provider-1.inet.0
    set routing-options rib-groups provider-1-master import-rib inet.0
    set routing-options rib-groups provider-1-master import-policy provider-1-master-import
    set routing-instances provider-1 instance-type vrf
    set routing-instances provider-1 route-distinguisher 64512:101
    set routing-instances provider-1 vrf-target target:64512:101
    set routing-instances provider-1 vrf-table-label
    set routing-instances provider-1 routing-options static route 0.0.0.0/0 next-table inet.0
    set routing-instances provider-1 routing-options generate route 172.16.11.0/24 next-table provider-1.inet.0
    set routing-instances provider-1 routing-options auto-export family inet unicast rib-group provider-1-master
    
    2. 在master中添加带有下一表(next-table)、指向vrf.inet.0的静态路由。
    set routing-options static route 172.16.11.0/24 next-table provider-1.inet.0
    

    建议使用选项2。

    请注意,需要为路由协议更新导出策略,以通告此类静态路由。

    5.4.2 Egress

    对于egress流量,这里有两个选择。

    1. 在VRF中添加带有下一表(next-table)、指向inet.0的静态路由。
    set routing-instances provider-1 routing-options static route 0.0.0.0/0 next-table inet.0
    

    这里的问题是,如果像上面那样使用默认路由,则会导致路由循环。例如,vrf.inet.0中并没有172.16.11.5/32的路由,目的地为该地址的流量将在master和VRF之间来回循环。使用特定的路由可以避免路由循环,但这不是动态的,并且无法扩展。

    2. 将master中路由协议接收到的路由泄漏到VRF。
    set protocols bgp group corp type external
    set protocols bgp group corp family inet unicast rib-group bgp-corp-provider-1
    set protocols bgp group corp export direct
    set protocols bgp group corp neighbor 10.6.30.1 peer-as 64041
    set routing-options rib-groups bgp-corp-provider-1 import-rib inet.0
    set routing-options rib-groups bgp-corp-provider-1 import-rib provider-1.inet.0
    

    同样,由于Junos的限制,泄漏到VRF(辅助RIB)中的路由无法发布给Tungsten Fabric。解决方案是添加默认拒绝路由。

    set routing-instances provider-1 routing-options static route 0.0.0.0/0 reject
    

    5.4.3 解决方案

    作为结论,这里是解决方案。

    • 从master泄漏路由到VRF,用于egress流量。

    • 在master中添加静态路由,用于ingress流量。

    附录A.1是完整的配置。

    请注意,这不适用于MPLSoUDP。

    5.5 转发过滤器和下一表(Next-table)

    此解决方案是,使用转发过滤器(forwarding filter)将ingress流量引导到VRF RI,并使用带有下一表(next-table)的静态路由将egress流量引导到master RI。

    该解决方案有两个问题。

    1. 由于Junos中的某些问题,它不适用于MPLSoUDP。
    2. 要向外部发布路由,必须添加指向网关本身的路由。Ingress流量将首先到达过滤器,因此静态路由仅用于通告目的,对流量没有影响。

    5.6 VRF到VRF

    附录A.2是一个示例配置。

    请注意,由于Family route-target,在Tungsten Fabric中,对于暴露的VN,必须将远程VRF RT配置为导入RT。否则,网关将不会从远程VRF发布INET-VPN路由。

    5.7 Community

    Tungsten Fabric中的路由有以下的community。

    • route target
    • encapsulation
    • mac-mobility
    • 0x8004 (security group)
    • 0x8071 (origin VN)

    根据使用情况(例如去往外部集群或另一个Tungsten Fabric集群的路由),这些community可能需要清理,也可能不需要。

    附录A.2中的配置是清理community的一个示例。

    6 多集群

    单个网关可以支持多个集群,它们本应该具有不同的ASN。

    • 网关配置ASN。
    • 集群具有不同的专用ASN。
    • 每个集群内控制节点内的iBGP。
    • 每个集群的网关和控制节点之间的eBGP。
    • 多个BGP组可以共享连接到不同邻居组的同一接口。
    • 如果每个集群都位于单独的网络中,则每个集群都有一个动态隧道组。
    • 每个集群应具有单独的公共地址空间。由于没有地址冲突,因此一个VRF路由实例可以由多个集群共享,并且所有集群中的公共虚拟网络必须具有相同的路由目标(routing target)。结果,来自一个集群的公共路由将泄漏到另一个集群。

    附录

    A.1 RIB组和下一表(Next-table)

    set version 18.3R1.9
    set chassis fpc 0 pic 0 tunnel-services
    set interfaces ge-0/0/0 mac 52:54:00:8c:f9:2b
    set interfaces ge-0/0/0 unit 0 family inet address 10.6.30.2/30
    set interfaces ge-0/0/1 mac 52:54:00:c4:ee:41
    set interfaces ge-0/0/1 unit 0 family inet address 10.6.20.1/30
    set interfaces fxp0 unit 0 family inet address 10.6.8.31/24
    set interfaces lo0 unit 0 family inet address 10.6.0.31/32
    set interfaces lo0 unit 11 family inet address 172.16.11.250/32
    set interfaces lo0 unit 12 family inet address 172.16.12.250/32
    set routing-options interface-routes rib-group inet master-direct-vrf
    set routing-options static route 172.16.11.0/24 next-table provider-1.inet.0
    set routing-options static route 172.16.12.0/24 next-table provider-2.inet.0
    set routing-options rib-groups bgp-corp-vrf import-rib inet.0
    set routing-options rib-groups bgp-corp-vrf import-rib provider-1.inet.0
    set routing-options rib-groups bgp-corp-vrf import-rib provider-2.inet.0
    set routing-options rib-groups master-direct-vrf import-rib inet.0
    set routing-options rib-groups master-direct-vrf import-rib provider-1.inet.0
    set routing-options rib-groups master-direct-vrf import-rib provider-2.inet.0
    set routing-options rib-groups master-direct-vrf import-policy rib-import-master-vrf
    set routing-options route-distinguisher-id 10.6.0.31
    set routing-options autonomous-system 64031
    set routing-options dynamic-tunnels contrail source-address 10.6.0.31
    set routing-options dynamic-tunnels contrail gre
    set routing-options dynamic-tunnels contrail destination-networks 10.6.11.0/24
    set protocols bgp group corp type external
    set protocols bgp group corp family inet unicast rib-group bgp-corp-vrf
    set protocols bgp group corp export direct
    set protocols bgp group corp neighbor 10.6.30.1 peer-as 64041
    set protocols bgp group fabric type external
    set protocols bgp group fabric family inet unicast
    set protocols bgp group fabric export direct
    set protocols bgp group fabric neighbor 10.6.20.2 peer-as 64011
    set protocols bgp group vpn-contrail type external
    set protocols bgp group vpn-contrail multihop
    set protocols bgp group vpn-contrail local-address 10.6.0.31
    set protocols bgp group vpn-contrail keep all
    set protocols bgp group vpn-contrail family inet-vpn unicast
    set protocols bgp group vpn-contrail family route-target
    set protocols bgp group vpn-contrail neighbor 10.6.11.1 peer-as 64512
    set policy-options policy-statement direct term t1 from protocol direct
    set policy-options policy-statement direct term t1 from protocol aggregate
    set policy-options policy-statement direct term t1 then accept
    set policy-options policy-statement direct term t2 from protocol static
    set policy-options policy-statement direct term t2 from route-filter 172.16.11.0/24 exact
    set policy-options policy-statement direct term t2 then accept
    set policy-options policy-statement direct term t3 from protocol static
    set policy-options policy-statement direct term t3 from route-filter 172.16.12.0/24 exact
    set policy-options policy-statement direct term t3 then accept
    set policy-options policy-statement rib-import-master-vrf term t2 from protocol direct
    set policy-options policy-statement rib-import-master-vrf term t2 then accept
    set policy-options policy-statement rib-import-master-vrf term end then reject
    set policy-options policy-statement vrf-export-provider-1 term t1 then community add provider-1
    set policy-options policy-statement vrf-export-provider-1 term t1 then accept
    set policy-options policy-statement vrf-export-provider-1 term end then reject
    set policy-options policy-statement vrf-export-provider-2 term t1 then community add provider-2
    set policy-options policy-statement vrf-export-provider-2 term t1 then accept
    set policy-options policy-statement vrf-export-provider-2 term end then reject
    set policy-options policy-statement vrf-import-provider-1 term t1 from community provider-1
    set policy-options policy-statement vrf-import-provider-1 term t1 from community ext-host
    set policy-options policy-statement vrf-import-provider-1 term t1 then accept
    set policy-options policy-statement vrf-import-provider-1 term end then reject
    set policy-options policy-statement vrf-import-provider-2 term t1 from community provider-2
    set policy-options policy-statement vrf-import-provider-2 term t1 from community ext-host
    set policy-options policy-statement vrf-import-provider-2 term t1 then accept
    set policy-options policy-statement vrf-import-provider-2 term end then reject
    set policy-options community all-encaps members encapsulation:*:*
    set policy-options community all-origin-vns members 0x8071:*:*
    set policy-options community all-security-groups members 0x8004:*:*
    set policy-options community encap-udp members encapsulation:64512:13
    set policy-options community ext-host members target:64510:101
    set policy-options community provider-1 members target:64512:101
    set policy-options community provider-2 members target:64512:102
    set routing-instances provider-1 instance-type vrf
    set routing-instances provider-1 interface lo0.11
    set routing-instances provider-1 route-distinguisher 64512:101
    set routing-instances provider-1 vrf-import vrf-import-provider-1
    set routing-instances provider-1 vrf-export vrf-export-provider-1
    set routing-instances provider-1 vrf-table-label
    set routing-instances provider-1 routing-options static route 0.0.0.0/0 reject
    set routing-instances provider-2 instance-type vrf
    set routing-instances provider-2 interface lo0.12
    set routing-instances provider-2 route-distinguisher 64512:102
    set routing-instances provider-2 vrf-import vrf-import-provider-2
    set routing-instances provider-2 vrf-export vrf-export-provider-2
    set routing-instances provider-2 vrf-table-label
    set routing-instances provider-2 routing-options static route 0.0.0.0/0 reject
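
    The configuration above depends on the master-direct-vrf rib-group, the provider-1/provider-2 route targets, and the contrail dynamic GRE tunnel working together. As a quick sanity check (a hedged sketch using standard Junos operational commands, not output captured from the original lab), the following should show the BGP sessions established, the dynamic tunnel toward 10.6.11.0/24 created, and VPN routes carrying target:64512:101 / target:64512:102 installed in the two VRF tables:

    show bgp summary
    show dynamic-tunnels database
    show route table provider-1.inet.0
    show route table provider-2.inet.0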
    

    A.2 VRF-to-VRF

    set version 18.3R1.9
    set chassis fpc 0 pic 0 tunnel-services
    set interfaces ge-0/0/0 mac 52:54:00:8c:f9:2b
    set interfaces ge-0/0/0 unit 0 family inet address 10.6.30.2/30
    set interfaces ge-0/0/1 mac 52:54:00:c4:ee:41
    set interfaces ge-0/0/1 unit 0 family inet address 10.6.20.1/30
    set interfaces fxp0 unit 0 family inet address 10.6.8.31/24
    set interfaces lo0 unit 0 family inet address 10.6.0.31/32
    set routing-options route-distinguisher-id 10.6.0.31
    set routing-options autonomous-system 64031
    set routing-options dynamic-tunnels contrail source-address 10.6.0.31
    set routing-options dynamic-tunnels contrail gre
    set routing-options dynamic-tunnels contrail destination-networks 10.6.11.0/24
    set routing-options dynamic-tunnels contrail destination-networks 10.6.0.0/16
    set protocols bgp group corp type external
    set protocols bgp group corp family inet unicast
    set protocols bgp group corp export direct
    set protocols bgp group corp neighbor 10.6.30.1 peer-as 64041
    set protocols bgp group fabric type external
    set protocols bgp group fabric family inet unicast
    set protocols bgp group fabric export direct
    set protocols bgp group fabric neighbor 10.6.20.2 peer-as 64011
    set protocols bgp group vpn-contrail type external
    set protocols bgp group vpn-contrail multihop
    set protocols bgp group vpn-contrail local-address 10.6.0.31
    set protocols bgp group vpn-contrail keep all
    set protocols bgp group vpn-contrail family inet-vpn unicast
    set protocols bgp group vpn-contrail family route-target
    set protocols bgp group vpn-contrail neighbor 10.6.11.1 peer-as 64512
    set protocols bgp group vpn-external type external
    set protocols bgp group vpn-external multihop
    set protocols bgp group vpn-external local-address 10.6.0.31
    set protocols bgp group vpn-external keep all
    set protocols bgp group vpn-external family inet-vpn unicast
    set protocols bgp group vpn-external family route-target
    set protocols bgp group vpn-external export vpn-external-export
    set protocols bgp group vpn-external neighbor 10.6.0.41 peer-as 64041
    set policy-options policy-statement direct term t1 from protocol direct
    set policy-options policy-statement direct term t1 then accept
    set policy-options policy-statement provider-1-export term t1 then accept
    set policy-options policy-statement provider-1-import term t1 from community provider-1
    set policy-options policy-statement provider-1-import term t1 from community ext-host
    set policy-options policy-statement provider-1-import term t1 then accept
    set policy-options policy-statement vpn-external-export term t1 from community provider-1
    set policy-options policy-statement vpn-external-export term t1 then community add ext-host
    set policy-options policy-statement vpn-external-export term t1 then community delete all-encaps
    set policy-options policy-statement vpn-external-export term t1 then community delete all-security-groups
    set policy-options policy-statement vpn-external-export term t1 then community delete all-origin-vns
    set policy-options policy-statement vpn-external-export term t1 then accept
    set policy-options community all-encaps members encapsulation:*:*
    set policy-options community all-origin-vns members 0x8071:*:*
    set policy-options community all-security-groups members 0x8004:*:*
    set policy-options community ext-host members target:64510:101
    set policy-options community provider-1 members target:64512:101
    set firewall family inet filter to-vrf term 1 from destination-address 172.16.11.0/24
    set firewall family inet filter to-vrf term 1 then routing-instance provider-1
    set firewall family inet filter to-vrf term default then accept
    set routing-instances provider-1 instance-type vrf
    set routing-instances provider-1 route-distinguisher 64512:101
    set routing-instances provider-1 vrf-import provider-1-import
    set routing-instances provider-1 vrf-export provider-1-export
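
    Note that the to-vrf firewall filter above steers traffic destined to 172.16.11.0/24 into the provider-1 VRF (filter-based forwarding), but the lines shown do not attach the filter to an interface. As an assumed example (not part of the original configuration), it would typically be applied as an input filter on the interface where that traffic arrives, for instance the corp-facing ge-0/0/0, after which the resulting routes can be checked with standard operational commands:

    set interfaces ge-0/0/0 unit 0 family inet filter input to-vrf
    show route table provider-1.inet.0
    show bgp summary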
    
    posted in Blog
  • TF Chinese Community Technical Committee Ninth Online Meeting -- Invitation + Minutes

    Invitation --

    The ninth online meeting will be held on Wednesday, June 10, from 13:00 to 14:00.

    How to join: Zoom meeting

    The main topic of this meeting is a discussion of the TF release submission; everyone is welcome to attend. Edward Ting, representing the international community, will join to share the international community's progress and to submit the release.

    The Tungsten Fabric project is an open source project built on standards-based protocols that provides all of the components required for network virtualization and network security. Its components include an SDN controller, a virtual router, an analytics engine, published northbound APIs, hardware integration features, cloud orchestration software, and an extensive REST API.

    Minutes --

    TF Chinese Community Technical Committee Ninth Online Meeting
    Date: June 10, 2020
    Topic: Edward Ting, representing the international community, joined to share the international community's progress and to submit the release.
    Minutes:
    https://tungstenfabric.org.cn/wiki/pages/viewpage.action?pageId=4423704

    Materials link: https://pan.baidu.com/s/1yxBdv89rTUzLrWVPbVxgAA
    Access code: slj9

    Learning materials shared by Edward Ting:
    http://r.lfnetworking.org/?prefix=lfn-zoom/TF/TSC/2020-06-04 10.04.06 TF Weekly TSC Meeting 126834756/
    https://www.juniper.net/documentation/product/en_US/contrail-networking
    https://www.juniper.net/documentation/en_US/contrail20/information-products/pathway-pages/contrail-service-provider-feature-guide.pdf
    http://configuration-schema-documentation.s3-website-us-west-1.amazonaws.com/R3.2/tutorial_with_rest.html

    posted in News
  • TF Technical Meeting Highlights: Backed by Juniper Networks' CTO, the Community Moves Toward Cloud Native

    Recently, at a regular meeting of the Tungsten Fabric community's Technical Steering Committee (TSC), the community's current development and challenges were discussed, and members including Juniper Networks plan to increase their resource commitments.

    TSC chair Prabhjot Singh Sethi said that Tungsten Fabric's next steps will follow a cloud-native approach, with plans to add service-mesh capabilities and to extend the data path, CNF, and SDN solutions.

    At the same time, the TF community is working to resolve ONAP integration with Tungsten Fabric, so that operators can consider Tungsten Fabric as an alternative SDN controller.

    Juniper Networks CTO Raj Yavatkar attended the meeting and said he is excited about Tungsten Fabric's direction, will accelerate contributions to open source, and will provide more resources. Specifically, this includes hiring engineers with Kubernetes backgrounds to support the team, adding a new product manager for cloud-native integration, encouraging Juniper engineers to contribute more to the community, and giving more support to project technical lead Sukhdev Kapur.

    In addition, Juniper Networks will try to help support the new operator framework (operator-framework), contribute to open source through engineering work, and play an active role in ensuring that patches get merged.

    Juniper Distinguished Engineer Sukhdev Kapur said that many changes are coming to the community, along with plans to improve transparency.

    Casey Cain of the Linux Foundation spoke about the need to identify user stories, case studies, and deployment insights, and about plans to use these materials to develop blog posts that promote Tungsten Fabric; Casey will also create a wiki page to track collaboration issues.

    Original TSC minutes:
    https://wiki.tungsten.io/display/TUN/2020-06-04+TSC+Minutes

    posted in News