Quantum (Grizzly) L3 Agent (OVS) Workflow
This blog post may be reposted, but please retain the original author information.
Sina Weibo: @孔令贤HW
QQ: 363210168
Blog: http://blog.csdn.net/lynn_kong
The content is my own study, research, and summary; any resemblance to other work is purely an honor!
Changelog:
2013.4.10 Added the system logical view
2013.4.11 Added diagrams of the iptables filter and nat tables
Today, following longgeek's single-node Grizzly installation guide, I finally got a Grizzly deployment running and successfully performed the key operations: creating a VM, associating a floating IP, pinging the floating IP from outside, and logging into the VM over SSH. I have been interested in Quantum since Folsom; underneath Quantum's rich logical networking features there is quite another scene. In this post I peel back the surface and look at the inner workings of Quantum's l3 agent.
Note: I am using the OVS plugin here.
The logical view of my system is as follows:
My environment is modeled as follows:
The Quantum commands I used:
- EXTERNAL_NET_ID=$(quantum net-create external_net1 --router:external=True | awk '/ id / {print $4}')
- SUBNET_ID=$(quantum subnet-create external_net1 182.168.61.0/24 --name=external_subnet1 --allocation-pool start=182.168.61.249,end=182.168.61.253 --gateway_ip 182.168.61.1 --enable_dhcp=False | awk '/ id / {print $4}')
- INTERNAL_NET_ID=$(quantum net-create demo_net1 | awk '/ id / {print $4}')
- DEMO_SUBNET_ID=$(quantum subnet-create demo_net1 10.1.1.0/24 --name=demo_subnet1 --gateway_ip 10.1.1.1 | awk '/ id / {print $4}')
- DEMO_ROUTER_ID=$(quantum router-create demo_router1 | awk '/ id / {print $4}')
- quantum router-interface-add $DEMO_ROUTER_ID $DEMO_SUBNET_ID
- quantum router-gateway-set $DEMO_ROUTER_ID $EXTERNAL_NET_ID
- DEMO_PORT_ID=$(quantum port-create --fixed-ip subnet_id=$DEMO_SUBNET_ID,ip_address=10.1.1.13 demo_net1 | awk '/ id / {print $4}')
- nova keypair-add mykey > mykey.pem
- nova boot myvm --image <image_id> --flavor 2 --key_name mykey --nic port-id=$DEMO_PORT_ID
- quantum floatingip-create external_net1
- quantum floatingip-associate <floatingip_id> $DEMO_PORT_ID
1. Initialization
The l3 agent's initialization mainly cleans up all router-related devices.
Get the namespace names:
[email protected]:/etc/init.d# ip netns list
qdhcp-c4d8b48b-6ff7-43b6-a203-8f1192a16f07
qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967
List the devices inside the namespace whose name starts with qrouter:
[email protected]:/etc/init.d# ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 ip -o link list
14: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN \ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
28: qr-aff7e122-3b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN \ link/ether fa:16:3e:50:1d:0d brd ff:ff:ff:ff:ff:ff
29: qg-282f4d8c-81: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN \ link/ether fa:16:3e:d7:9f:87 brd ff:ff:ff:ff:ff:ff
Make sure br-int exists (the qr-XXX ports live on br-int, so br-int must be operated on to delete them):
[email protected]:/etc/init.d# ip -o link show br-int
10: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN \ link/ether 86:7e:c0:44:ff:42 brd ff:ff:ff:ff:ff:ff
Delete the qr-XXX device:
ovs-vsctl --timeout=2 -- --if-exists del-port br-int qr-aff7e122-3b
Likewise, make sure br-ex exists (the qg-XXX ports live on br-ex, so br-ex must be operated on to delete them):
[email protected]:/etc/init.d# ip -o link show br-ex
11: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN \ link/ether 28:6e:d4:f0:c5:4b brd ff:ff:ff:ff:ff:ff
Delete the qg-XXX device:
ovs-vsctl --timeout=2 -- --if-exists del-port br-ex qg-282f4d8c-81
This completes the device-related operations during initialization.
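The cleanup above can be sketched as a small script. This is my own reconstruction of the logic for illustration, not the agent's actual code; the namespace, device, and bridge names are taken from the session above:

```python
# Sketch of the l3 agent's startup cleanup: for every qrouter namespace,
# find qr-/qg- devices and emit the ovs-vsctl commands that remove them.
# (Reconstruction for illustration; not the agent's real implementation.)

def cleanup_commands(namespaces, devices_in_ns):
    """namespaces: netns names; devices_in_ns: ns name -> device names."""
    commands = []
    for ns in namespaces:
        if not ns.startswith("qrouter-"):
            continue
        for dev in devices_in_ns[ns]:
            if dev.startswith("qr-"):       # internal ports live on br-int
                bridge = "br-int"
            elif dev.startswith("qg-"):     # gateway ports live on br-ex
                bridge = "br-ex"
            else:
                continue                    # e.g. lo is left alone
            commands.append(
                "ovs-vsctl --timeout=2 -- --if-exists del-port %s %s"
                % (bridge, dev))
    return commands

cmds = cleanup_commands(
    ["qdhcp-c4d8b48b-6ff7-43b6-a203-8f1192a16f07",
     "qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967"],
    {"qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967":
         ["lo", "qr-aff7e122-3b", "qg-282f4d8c-81"],
     "qdhcp-c4d8b48b-6ff7-43b6-a203-8f1192a16f07":
         ["lo", "tap3f1f785c-70"]})
for c in cmds:
    print(c)
```

Running this prints exactly the two del-port commands shown above, and nothing for the dhcp namespace.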
2. Loop tasks
In its loop task, the l3 agent keeps querying the Quantum plugin for router information and compares it with the information it maintained last time. For a newly added router, it adds devices on the bridges (and also sets device attributes, iptables rules, and so on); for a deleted router, it deletes the devices (along with the corresponding iptables rules). The commands and outputs below all use the first startup as the example.
A few details need attention when using the l3 agent. If namespaces are not enabled (overlapping IPs cannot be used), one l3 agent can only handle one router; in that case, after creating the external network and the router, you must write their IDs into the router_id and gateway_external_network_id options of the configuration file. If namespaces are enabled, these two options are not needed.
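For the non-namespace case, the relevant fragment of l3_agent.ini would look roughly like this (the UUIDs are the ones from my environment; treat the exact option spellings as an assumption to verify against your Grizzly install):

```ini
[DEFAULT]
use_namespaces = False
# Only needed when namespaces are disabled:
router_id = afabf77d-ffe4-4ab2-a635-8ad33d58f967
gateway_external_network_id = e7bb2f41-4f2a-4dbf-a701-7630cfd72de5
```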
2.1. Internal port handling
For each internal port attached to the router (after router-interface-add, quantum creates a port for the subnet gateway with device_owner='network:router_interface'), a device name qr-XXX is generated, where XXX is the first part of the port id. First check whether the device already exists:
[email protected]:/etc/init.d# ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 ip -o link show qr-aff7e122-3b
Device "qr-aff7e122-3b" does not exist.
Add the port on br-int and set some of its attributes in the OVS database:
ovs-vsctl -- --may-exist add-port br-int qr-aff7e122-3b -- set Interface qr-aff7e122-3b type=internal -- set Interface qr-aff7e122-3b external-ids:iface-id=aff7e122-3b0c-4c7b-8e22-dbe63a84dfd6 -- set Interface qr-aff7e122-3b external-ids:iface-status=active -- set Interface qr-aff7e122-3b external-ids:attached-mac=fa:16:3e:50:1d:0d
Set the port's MAC address:
ip link set qr-aff7e122-3b address fa:16:3e:50:1d:0d
Move the port into the namespace:
ip link set qr-aff7e122-3b netns qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967
Bring the port up:
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 ip link set qr-aff7e122-3b up
Set the port's IP address:
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 ip addr show qr-aff7e122-3b permanent scope global
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 ip -4 addr add 10.1.1.1/24 brd 10.1.1.255 scope global dev qr-aff7e122-3b
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 arping -A -U -I qr-aff7e122-3b -c 3 10.1.1.1
Set the iptables rules; see the dump below for the specific rules.
2.2. External port handling
As with internal ports (after router-gateway-set, quantum creates a port with device_owner='network:router_gateway'), a device name qg-XXX is generated, where XXX is the first part of the port id. First check whether the external port already exists:
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 ip -o link show qg-282f4d8c-81
Add the port on br-ex and set some of its attributes in the OVS database:
ovs-vsctl -- --may-exist add-port br-ex qg-282f4d8c-81 -- set Interface qg-282f4d8c-81 type=internal -- set Interface qg-282f4d8c-81 external-ids:iface-id=282f4d8c-8123-4260-ada2-4c7dd2dbe824 -- set Interface qg-282f4d8c-81 external-ids:iface-status=active -- set Interface qg-282f4d8c-81 external-ids:attached-mac=fa:16:3e:d7:9f:87
Set the port's MAC address:
ip link set qg-282f4d8c-81 address fa:16:3e:d7:9f:87
Move the port into the namespace:
ip link set qg-282f4d8c-81 netns qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967
Bring the port up:
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 ip link set qg-282f4d8c-81 up
Set the router's external IP:
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 ip addr show qg-282f4d8c-81 permanent scope global
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 ip -4 addr add 182.168.61.249/24 brd 182.168.61.255 scope global dev qg-282f4d8c-81
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 arping -A -U -I qg-282f4d8c-81 -c 3 182.168.61.249
Add a default route; 182.168.61.1 is the gateway of the external IP range:
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 route add default gw 182.168.61.1
Set iptables; see the dump below for the specific rules.
2.3. Floating IP handling
My system has two VMs (only one is drawn in the figure above) with internal IPs 10.1.1.11 and 10.1.1.13; two floating IPs, 182.168.61.250 and 182.168.61.251, are associated with them respectively.
Add a public address to qg-282f4d8c-81:
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 ip -4 addr add 182.168.61.251/32 brd 182.168.61.251 scope global dev qg-282f4d8c-81
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 arping -A -U -I qg-282f4d8c-81 -c 3 182.168.61.251
Add a second public address to qg-282f4d8c-81:
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 ip -4 addr add 182.168.61.250/32 brd 182.168.61.250 scope global dev qg-282f4d8c-81
ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 arping -A -U -I qg-282f4d8c-81 -c 3 182.168.61.250
2.4. iptables rules
Below are the iptables rules after the first run of the loop task. Readers who know iptables should be able to tell at a glance what the l3 agent has done, so I will not belabor it here.
[email protected]:/etc/init.d# ip netns exec qrouter-afabf77d-ffe4-4ab2-a635-8ad33d58f967 iptables-save
# Generated by iptables-save v1.4.12 on Tue Apr 9 18:19:59 2013
*nat
:PREROUTING ACCEPT [93:28644]
:INPUT ACCEPT [93:28644]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:quantum-l3-agent-OUTPUT - [0:0]
:quantum-l3-agent-POSTROUTING - [0:0]
:quantum-l3-agent-PREROUTING - [0:0]
:quantum-l3-agent-float-snat - [0:0]
:quantum-l3-agent-snat - [0:0]
:quantum-postrouting-bottom - [0:0]
-A PREROUTING -j quantum-l3-agent-PREROUTING
-A OUTPUT -j quantum-l3-agent-OUTPUT
-A POSTROUTING -j quantum-l3-agent-POSTROUTING
-A POSTROUTING -j quantum-postrouting-bottom
-A quantum-l3-agent-OUTPUT -d 182.168.61.251/32 -j DNAT --to-destination 10.1.1.13
-A quantum-l3-agent-OUTPUT -d 182.168.61.250/32 -j DNAT --to-destination 10.1.1.11
-A quantum-l3-agent-POSTROUTING ! -i qg-282f4d8c-81 ! -o qg-282f4d8c-81 -m conntrack ! --ctstate DNAT -j ACCEPT
-A quantum-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A quantum-l3-agent-PREROUTING -d 182.168.61.251/32 -j DNAT --to-destination 10.1.1.13
-A quantum-l3-agent-PREROUTING -d 182.168.61.250/32 -j DNAT --to-destination 10.1.1.11
-A quantum-l3-agent-float-snat -s 10.1.1.13/32 -j SNAT --to-source 182.168.61.251
-A quantum-l3-agent-float-snat -s 10.1.1.11/32 -j SNAT --to-source 182.168.61.250
-A quantum-l3-agent-snat -j quantum-l3-agent-float-snat
-A quantum-l3-agent-snat -s 10.1.1.0/24 -j SNAT --to-source 182.168.61.249
-A quantum-postrouting-bottom -j quantum-l3-agent-snat
COMMIT
# Completed on Tue Apr 9 18:19:59 2013
# Generated by iptables-save v1.4.12 on Tue Apr 9 18:19:59 2013
*filter
:INPUT ACCEPT [100:30818]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:quantum-filter-top - [0:0]
:quantum-l3-agent-FORWARD - [0:0]
:quantum-l3-agent-INPUT - [0:0]
:quantum-l3-agent-OUTPUT - [0:0]
:quantum-l3-agent-local - [0:0]
-A INPUT -j quantum-l3-agent-INPUT
-A FORWARD -j quantum-filter-top
-A FORWARD -j quantum-l3-agent-FORWARD
-A OUTPUT -j quantum-filter-top
-A OUTPUT -j quantum-l3-agent-OUTPUT
-A quantum-filter-top -j quantum-l3-agent-local
-A quantum-l3-agent-INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 9697 -j ACCEPT
COMMIT
# Completed on Tue Apr 9 18:19:59 2013
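To make the nat table concrete, here is a toy simulation of the address translations those rules perform (my own illustration, not agent code): floating IPs are DNATed in PREROUTING, float-snat maps known fixed IPs 1:1, and everything else from 10.1.1.0/24 falls through to the router's gateway address:

```python
import ipaddress

# 1:1 floating-IP mappings taken from the rules above
DNAT = {"182.168.61.251": "10.1.1.13", "182.168.61.250": "10.1.1.11"}
FLOAT_SNAT = {fixed: floating for floating, fixed in DNAT.items()}
DEFAULT_SNAT = "182.168.61.249"        # quantum-l3-agent-snat catch-all
INTERNAL = ipaddress.ip_network("10.1.1.0/24")

def prerouting(dst):
    """Inbound: rewrite a floating IP to its fixed IP, like the DNAT rules."""
    return DNAT.get(dst, dst)

def postrouting(src):
    """Outbound: float-snat wins; otherwise SNAT the whole subnet to .249."""
    if src in FLOAT_SNAT:
        return FLOAT_SNAT[src]
    if ipaddress.ip_address(src) in INTERNAL:
        return DEFAULT_SNAT
    return src

print(prerouting("182.168.61.251"))   # traffic to a floating IP
print(postrouting("10.1.1.13"))       # VM with a floating IP
print(postrouting("10.1.1.12"))       # VM without one
```

This is why a VM without a floating IP can still reach the outside world: its traffic leaves SNATed to the gateway port address 182.168.61.249.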
The diagrams below illustrate the iptables filter and nat tables in the system more clearly:
Following the previous post, "Quantum (Grizzly) L3 Agent Workflow", this post walks through the workflow of the quantum agent implemented with OVS (gre); the system environment is the same as in the previous post.
Since the agent's main job is operating the vSwitch, this post is mostly commands; for OpenvSwitch command usage, please refer to the official documentation.
1. System objects
Below are the VM and network information in my environment. Pay particular attention to the port ids, since the commands that follow are all driven by port information.
[email protected]:~# nova list
- +--------------------------------------+--------+--------+-------------------------------------+
- | ID | Name | Status | Networks |
- +--------------------------------------+--------+--------+-------------------------------------+
- | 50ab650a-289c-4b84-b9f6-9e6c93516a4b | cirros | ACTIVE | demo_net1=10.1.1.13,182.168.61.250 |
- +--------------------------------------+--------+--------+-------------------------------------+
[email protected]:~# quantum net-list
- +--------------------------------------+---------------+------------------------------------------------------+
- | id | name | subnets |
- +--------------------------------------+---------------+------------------------------------------------------+
- | c4d8b48b-6ff7-43b6-a203-8f1192a16f07 | demo_net1 | 1e18074b-2ad1-4306-a4f7-c39e67622956 10.1.1.0/24 |
- | e7bb2f41-4f2a-4dbf-a701-7630cfd72de5 | external_net1 | 3d6037f5-74e9-4f9f-9c07-3cd8b7b69a46 182.168.61.0/24 |
- +--------------------------------------+---------------+------------------------------------------------------+
[email protected]:~# quantum port-list
- +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
- | id | name | mac_address | fixed_ips |
- +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
- | 0e2dfa90-d3c8-4938-b35f-e85ed71d0270 | | fa:16:3e:2a:6e:1c | {"subnet_id":"1e18074b-2ad1-4306-a4f7-c39e67622956", "ip_address":"10.1.1.1"} |
- | 1e6720f1-8c3f-46c5-8313-72b0753037f8 | | fa:16:3e:30:de:88 | {"subnet_id":"3d6037f5-74e9-4f9f-9c07-3cd8b7b69a46", "ip_address":"182.168.61.249"} |
- | 3f1f785c-7015-46c4-95ba-9efd6cd323d0 | | fa:16:3e:b6:fc:21 | {"subnet_id":"1e18074b-2ad1-4306-a4f7-c39e67622956", "ip_address":"10.1.1.12"} |
- | 45772766-6c9e-431c-9c04-0365d91b6ae4 | | fa:16:3e:73:a2:59 | {"subnet_id":"3d6037f5-74e9-4f9f-9c07-3cd8b7b69a46", "ip_address":"182.168.61.250"} |
- | 99f91280-a060-442e-90b0-d8324e50efc8 | | fa:16:3e:02:dd:79 | {"subnet_id":"1e18074b-2ad1-4306-a4f7-c39e67622956", "ip_address":"10.1.1.13"} |
- +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
[email protected]:~# quantum router-list
- +--------------------------------------+--------------+--------------------------------------------------------+
- | id | name | external_gateway_info |
- +--------------------------------------+--------------+--------------------------------------------------------+
- | 0a23bd10-932e-435a-a673-0e508f0d56b9 | demo_router1 | {"network_id": "e7bb2f41-4f2a-4dbf-a701-7630cfd72de5"} |
- +--------------------------------------+--------------+--------------------------------------------------------+
2. OVS agent initialization
The two vSwitches involved in the agent's work are br-int and br-tun, so initialization mainly revolves around these two devices.
First delete the patch-tun port on br-int and clear all of its flows:
- ovs-vsctl --timeout=2 -- --if-exists del-port br-int patch-tun
- ovs-ofctl del-flows br-int
- ovs-ofctl add-flow br-int hard_timeout=0,idle_timeout=0,priority=1,actions=normal
Then handle br-tun and connect br-int and br-tun with a patch pair:
- ovs-vsctl --timeout=2 -- --if-exists del-br br-tun
- ovs-vsctl --timeout=2 add-br br-tun
- ovs-vsctl --timeout=2 add-port br-int patch-tun
- ovs-vsctl --timeout=2 set Interface patch-tun type=patch
- ovs-vsctl --timeout=2 set Interface patch-tun options:peer=patch-int
- [email protected]:~# ovs-vsctl --timeout=2 get Interface patch-tun ofport
- 6
- ovs-vsctl --timeout=2 add-port br-tun patch-int
- ovs-vsctl --timeout=2 set Interface patch-int type=patch
- ovs-vsctl --timeout=2 set Interface patch-int options:peer=patch-tun
- [email protected]:~# ovs-vsctl --timeout=2 get Interface patch-int ofport
- 1
- ovs-ofctl del-flows br-tun
- ovs-ofctl add-flow br-tun hard_timeout=0,idle_timeout=0,priority=1,actions=drop
3. OVS agent loop tasks
3.1. Tunnel synchronization
Since I use gre mode, before the loop task runs the agent registers the local tunnel with the plugin and receives from the plugin the information about all tunnels in the system. For each tunnel (other than the local one) it performs the following operations (my system has only one node, so these commands are not actually issued here; they are listed to explain how a gre-type agent works):
- ovs-vsctl --timeout=2 add-port br-tun <gre-tunnel_id>
- ovs-vsctl --timeout=2 set Interface <gre-tunnel_id> type=gre
- ovs-vsctl --timeout=2 set Interface <gre-tunnel_id> options:remote_ip=<remote_ip>
- ovs-vsctl --timeout=2 set Interface <gre-tunnel_id> options:in_key=flow
- ovs-vsctl --timeout=2 set Interface <gre-tunnel_id> options:out_key=flow
This is how the different nodes in the system reach each other through gre tunnels.
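That synchronization step can be sketched as follows: given the tunnel list pushed by the plugin, the agent emits one gre port per remote peer, skipping itself. This is a reconstruction for illustration only; the node IPs below are invented:

```python
# Reconstruction of the gre tunnel sync: one br-tun port per remote peer.
def tunnel_sync_commands(local_ip, tunnels):
    """tunnels: list of (tunnel_id, remote_ip) pairs pushed by the plugin."""
    commands = []
    for tunnel_id, remote_ip in tunnels:
        if remote_ip == local_ip:        # never tunnel to ourselves
            continue
        port = "gre-%s" % tunnel_id
        commands += [
            "ovs-vsctl --timeout=2 add-port br-tun %s" % port,
            "ovs-vsctl --timeout=2 set Interface %s type=gre" % port,
            "ovs-vsctl --timeout=2 set Interface %s options:remote_ip=%s"
            % (port, remote_ip),
            "ovs-vsctl --timeout=2 set Interface %s options:in_key=flow" % port,
            "ovs-vsctl --timeout=2 set Interface %s options:out_key=flow" % port,
        ]
    return commands

# Hypothetical two-node setup: this node is 192.168.0.1
for cmd in tunnel_sync_commands("192.168.0.1",
                                [(1, "192.168.0.1"), (2, "192.168.0.2")]):
    print(cmd)
```

With in_key=flow and out_key=flow, the gre key is not fixed on the port; it is set per packet by the flow rules shown later (set_tunnel / tun_id).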
3.2. Listing the devices on br-int
When the loop task starts, a freshly started agent has to handle every port on br-int, so it first lists the ports there. Here are the command and output from my environment:
[email protected]:~# ovs-vsctl --timeout=2 list-ports br-int
patch-tun
qr-0e2dfa90-d3
qvo99f91280-a0
tap3f1f785c-70
patch-tun above was created on br-int during agent initialization; qr-0e2dfa90-d3 is the VMs' gateway device; qvo99f91280-a0 is the device connected to the VM's NIC; tap3f1f785c-70 is the dhcp device.
Query each device's information:
- [email protected]:~# ovs-vsctl --timeout=2 get Interface patch-tun external_ids
- {}
- [email protected]:~# ovs-vsctl --timeout=2 get Interface qr-0e2dfa90-d3 external_ids
- {attached-mac="fa:16:3e:2a:6e:1c", iface-id="0e2dfa90-d3c8-4938-b35f-e85ed71d0270", iface-status=active}
- [email protected]:~# ovs-vsctl --timeout=2 get Interface qvo99f91280-a0 external_ids
- {attached-mac="fa:16:3e:02:dd:79", iface-id="99f91280-a060-442e-90b0-d8324e50efc8", iface-status=active, vm-uuid="50ab650a-289c-4b84-b9f6-9e6c93516a4b"}
- [email protected]:~# ovs-vsctl --timeout=2 get Interface tap3f1f785c-70 external_ids
- {attached-mac="fa:16:3e:b6:fc:21", iface-id="3f1f785c-7015-46c4-95ba-9efd6cd323d0", iface-status=active}
Note that iface-id above records the id of the port corresponding to each device.
3.3. Setting security group rules
This post does not cover security groups; if you are interested in them, watch for my later posts. This step is therefore skipped.
3.4. Per-port processing loop
1. Loop over every port (the port id comes from the iface-id above). Handle the first port, port-id=3f1f785c-7015-46c4-95ba-9efd6cd323d0, by querying the device's attributes on br-int:
[email protected]:~# ovs-vsctl --timeout=2 -- --columns=external_ids,name,ofport find Interface external_ids:iface-id="3f1f785c-7015-46c4-95ba-9efd6cd323d0"
external_ids : {attached-mac="fa:16:3e:b6:fc:21", iface-id="3f1f785c-7015-46c4-95ba-9efd6cd323d0", iface-status=active}
name : "tap3f1f785c-70"
ofport : 2
2. For the network the port belongs to, run the following (skipped if the network has already been handled):
- ovs-ofctl add-flow br-tun hard_timeout=0,idle_timeout=0,priority=4,in_port=1,dl_vlan=1,actions=set_tunnel:2,normal
- Explanation:
- in_port: the ofport of the patch-int port; see the command output above
- dl_vlan: the locally assigned internal vlan id, used to distinguish logical networks
- set_tunnel:2: the 2 here is the tunnel id assigned by the plugin
- ovs-ofctl add-flow br-tun hard_timeout=0,idle_timeout=0,priority=3,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00,tun_id=2,actions=mod_vlan_vid:1,output:1
- The 2, 1, 1 here are the plugin-assigned tunnel id, the internal vlan id, and the ofport of patch-int, respectively
3. Run the following commands for the port's own device:
ovs-ofctl add-flow br-tun hard_timeout=0,idle_timeout=0,priority=3,dl_dst=fa:16:3e:b6:fc:21,tun_id=2,actions=mod_vlan_vid:1,normal
ovs-vsctl --timeout=2 set Port tap3f1f785c-70 tag=1
Explanation:
tag=1: the locally assigned internal vlan id
ovs-ofctl del-flows br-int in_port=2
Explanation:
in_port=2: the ofport of tap3f1f785c-70
4. Repeat steps 1 and 3 above for the remaining ports (step 2 is skipped because the network has already been handled):
- [email protected]:~# ovs-vsctl --timeout=2 -- --columns=external_ids,name,ofport find Interface external_ids:iface-id="99f91280-a060-442e-90b0-d8324e50efc8"
- external_ids : {attached-mac="fa:16:3e:02:dd:79", iface-id="99f91280-a060-442e-90b0-d8324e50efc8", iface-status=active, vm-uuid="50ab650a-289c-4b84-b9f6-9e6c93516a4b"}
- name : "qvo99f91280-a0"
- ofport : 5
- ovs-ofctl add-flow br-tun hard_timeout=0,idle_timeout=0,priority=3,dl_dst=fa:16:3e:02:dd:79,tun_id=2,actions=mod_vlan_vid:1,normal
- ovs-vsctl --timeout=2 set Port qvo99f91280-a0 tag=1
- ovs-ofctl del-flows br-int in_port=5
- [email protected]:~# ovs-vsctl --timeout=2 -- --columns=external_ids,name,ofport find Interface external_ids:iface-id="0e2dfa90-d3c8-4938-b35f-e85ed71d0270"
- external_ids : {attached-mac="fa:16:3e:2a:6e:1c", iface-id="0e2dfa90-d3c8-4938-b35f-e85ed71d0270", iface-status=active}
- name : "qr-0e2dfa90-d3"
- ofport : 1
- ovs-ofctl add-flow br-tun hard_timeout=0,idle_timeout=0,priority=3,dl_dst=fa:16:3e:2a:6e:1c,tun_id=2,actions=mod_vlan_vid:1,normal
- ovs-vsctl --timeout=2 set Port qr-0e2dfa90-d3 tag=1
- ovs-ofctl del-flows br-int in_port=1
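Putting the flows together, br-tun translates between the node-local vlan tag and the plugin-assigned tunnel id in both directions. A toy simulation of that behavior (my own illustration using the ids from this environment, not agent code):

```python
# Toy model of the br-tun flows installed above (illustrative only).
VLAN_TO_TUNNEL = {1: 2}   # priority=4 rule: local vlan 1 -> tunnel id 2
TUNNEL_TO_VLAN = {2: 1}   # priority=3 rules: tunnel id 2 -> local vlan 1
LOCAL_MACS = {"fa:16:3e:b6:fc:21", "fa:16:3e:02:dd:79", "fa:16:3e:2a:6e:1c"}

def outbound(vlan):
    """Traffic from br-int via patch-int: swap the local vlan for a gre key."""
    return ("gre", VLAN_TO_TUNNEL[vlan])

def inbound(tun_id, dl_dst):
    """Traffic arriving on a gre port: tag with the local vlan if the MAC is
    known locally or the frame is multicast/broadcast, else fall through to
    the default priority=1 drop rule."""
    multicast = int(dl_dst.split(":")[0], 16) & 1   # dl_dst=01:... mask match
    if dl_dst in LOCAL_MACS or multicast:
        return ("vlan", TUNNEL_TO_VLAN[tun_id])
    return ("drop", None)

print(outbound(1))                        # VM traffic leaving the node
print(inbound(2, "fa:16:3e:b6:fc:21"))    # unicast to a local device
print(inbound(2, "ff:ff:ff:ff:ff:ff"))    # broadcast, e.g. ARP
print(inbound(2, "fa:16:3e:00:00:00"))    # unknown unicast is dropped
```

The internal vlan id thus never leaves the node; only the tunnel id is global, which is what lets different nodes pick different local vlan numbers for the same logical network.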
4. VLAN model diagrams (quantum agent + dhcp agent + l3 agent)
The commands above are for ovs gre mode. With ovs vlan mode the commands are similar, except that instead of br-tun they operate on the physical bridge corresponding to each network; the diagrams below should make this clear.
Logical model:
How the quantum agent is realized on a physical node:
How the dhcp agent and l3 agent are realized on a physical node (see also the previous post):
How namespace isolation is achieved: