
Juniper QFX, IP-Fabric and VXLAN – Part 2

See the first part here: Juniper QFX, IP-Fabric and VXLAN – Part 1

 

At last, here is Part 2 of the “Juniper QFX, IP-Fabric and VXLAN” post. In this post I will show how to configure VXLAN and verify that it works by looking at multicast, VTEP and general switching outputs. VXLAN configuration is actually quite a breeze once the multicast and IP-fabric configurations are in place. Remember that (currently, as of May 2015) the QFX series does not support VXLAN routing; you would need an MX or EX9200 for that.

Spine switches do not require any special configuration, as VXLAN is routed L3 traffic from the spine's point of view. The spines simply forward the traffic according to their routing tables and do not care whether it is VXLAN traffic or something else. With this configuration you also do not need anything special on the host; just match the VLAN ID specified on the trunk. See the topology below:

 

[Figure: VXLAN logical topology]

Leaf 1 config. The VLAN is added to the trunk towards the server as usual. Remember that (currently, as of May 2015) you cannot put a normal VLAN and a VXLAN gateway VLAN on the same trunk, which is quite unfortunate. We will use the loopback interface as the VTEP source interface (the loopbacks must be routed throughout the network). In the VLAN configuration we simply add a VNI number to the VLAN (the VNI, or VXLAN Network Identifier, identifies a VXLAN network much like a VLAN number identifies a VLAN) and a multicast group IP address (only multicast VXLAN is supported on the QFX switches when the QFX acts as a VXLAN gateway device). A sketch of the configuration is shown below.
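The original configuration screenshot is not reproduced here, but a minimal sketch of the Leaf 1 side could look like the following. The loopback address, trunk interface name, VLAN ID, VNI and multicast group are hypothetical values for illustration; the underlying IGP/BGP and PIM configuration from Part 1 is assumed to be in place:

    # Hypothetical values - adapt to your own addressing plan
    set interfaces lo0 unit 0 family inet address 10.255.0.1/32
    # Use the loopback as the VTEP source interface
    set switch-options vtep-source-interface lo0.0
    # Map the VLAN to a VNI and a multicast group
    set vlans VLAN100 vlan-id 100
    set vlans VLAN100 vxlan vni 100100
    set vlans VLAN100 vxlan multicast-group 239.1.1.100
    # Trunk towards the server; the host only needs to match VLAN ID 100
    set interfaces xe-0/0/10 unit 0 family ethernet-switching interface-mode trunk
    set interfaces xe-0/0/10 unit 0 family ethernet-switching vlan members VLAN100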

 

Leaf 2 config. Similar config here; see the sketch below. Note that the VNI and multicast group must match on both leaves for the traffic to work.
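Again a hedged sketch with hypothetical values; only the loopback address differs from Leaf 1, while the VNI and multicast group are intentionally identical:

    # Hypothetical values - the VNI and multicast group must match Leaf 1
    set interfaces lo0 unit 0 family inet address 10.255.0.2/32
    set switch-options vtep-source-interface lo0.0
    set vlans VLAN100 vlan-id 100
    set vlans VLAN100 vxlan vni 100100
    set vlans VLAN100 vxlan multicast-group 239.1.1.100
    set interfaces xe-0/0/10 unit 0 family ethernet-switching interface-mode trunk
    set interfaces xe-0/0/10 unit 0 family ethernet-switching vlan members VLAN100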

 

The next step is verifying that everything works. First up are outputs from the spine switches. We verify that multicast is working by checking that the loopback addresses are registered correctly in the “show pim join” table, and then look at some more detailed multicast outputs; typical commands are sketched below. These are the only things to check on the spine side (apart from routing, of course, but that should be fine if you followed the first part of this post).
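The original spine outputs are screenshots, so they are not reproduced here, but these standard Junos operational commands show the same information (the group address assumes the hypothetical 239.1.1.100 from the leaf configs above):

    show pim neighbors                        # PIM adjacencies towards the leaves
    show pim join extensive 239.1.1.100       # leaf loopbacks should appear for this group
    show multicast route extensive group 239.1.1.100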

 

Second up are routing and multicast outputs from the leaf switches. Check that the routing for multicast and VXLAN looks correct: you should see the configured multicast group and the loopback addresses, and the PIM join table should show the correct group and loopback addresses. The commands below sketch this.
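A sketch of the leaf-side checks, reusing the hypothetical addresses from above (10.255.0.2 being the remote leaf's loopback):

    show route 239.1.1.100                    # the configured multicast group
    show pim join extensive 239.1.1.100       # correct group and loopback addresses
    show route 10.255.0.2                     # remote VTEP loopback must be reachable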

 

Finally, the VXLAN outputs from the leaf switches. You should now see both local and remote MAC addresses (in this example, the remote ones behind vtep.32769); typical commands are sketched below. Verify that everything works by pinging from the host behind LEAF-1 to the host behind LEAF-2.
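The screenshots are not reproduced here, but these are the kinds of commands that show the VTEP and MAC learning state on the leaves:

    show ethernet-switching vxlan-tunnel-end-point source    # local VTEP bound to lo0.0
    show ethernet-switching vxlan-tunnel-end-point remote    # remote VTEPs, e.g. vtep.32769
    show ethernet-switching table                            # local and remote MAC addresses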

 

That's it! You now have a working VXLAN infrastructure over QFX switches. I will not touch VXLAN routing in this post, as the QFX switches do not support it at the moment. Please ask if you have any questions or comments about the above!


2 thoughts on “Juniper QFX, IP-Fabric and VXLAN – Part 2”

  1. I was pretty pleased to discover this page. I wanted to thank you for your time for this particularly wonderful read!
    I definitely savored every part of it, and I have bookmarked your site to check for new information.

  2. Very nice article concerning the Clos fabric architecture. It is probably usable with other vendors as well, to create a multi-vendor fabric with widely deployed technologies (BGP).
    But one question concerning the servers: in this schema they were “simply” attached with a 10G NIC to a single ToR. If my ToR crashes, how do I restore server connectivity? LACP with MC-LAG and two NICs on the servers? Or use a loopback, ECMP and BGP on the server and create the VTEP on the server side (at the cost of performance)?
