Cisco Nexus 9300 – VXLAN with BGP EVPN Control Plane – Part 1

For the last few weeks I have been configuring, testing and putting the new Cisco Nexus 9300 (Nexus 9000) platform with VXLAN and a BGP EVPN control plane into use. It proved somewhat challenging because documentation and user experiences are still sparse, and the posts, configuration guides and official documentation that do exist often tell you to do things differently from one another, with no clear explanation of why. So I decided to write this post to clear things up, and as always, if you have questions or agree/disagree with something, please comment below. Also note that this post is more of a configuration guide than a VXLAN (or BGP EVPN) introduction; Google and the Cisco documentation can help with that. Part 2 will introduce DCI (Data Center Interconnect) and how to implement it with VXLAN and BGP EVPN.

Two important notes before we begin:

  • If you use BGP as the ingress-replication protocol, you do not need any multicast configuration (a minimal config sketch follows below)!
  • Also note that the configuration below uses eBGP; an iBGP configuration looks quite different!
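
To illustrate the ingress-replication point, here is a minimal sketch of what the leaf (VTEP) side looks like on NX-OS when BGP handles both host reachability and ingress replication. The VLAN, VNI and loopback values are placeholders for this example, not the values used in the actual lab.

! Minimal leaf/VTEP sketch with BGP-based ingress replication (no multicast).
! VLAN 100, VNI 10100 and loopback0 are placeholder values.
nv overlay evpn
feature bgp
feature nv overlay
feature vn-segment-vlan-based

vlan 100
  vn-segment 10100

interface nve1
  no shutdown
  source-interface loopback0
  host-reachability protocol bgp
  member vni 10100
    ingress-replication protocol bgp

evpn
  vni 10100 l2
    rd auto
    ! "auto" derives the route-target from the local AS; with eBGP between
    ! leaves the ASNs differ, so manually configured route-targets are
    ! commonly used so that import/export match on both ends.
    route-target import auto
    route-target export auto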

The infra is built with the following specs and software:

  • Spines: Cisco Nexus 9332PQ
  • Leafs: Cisco Nexus 9372PX
  • All switches are running the 7.0(3)I1(3) software (latest as of 3.9.2015)

Topology overview (DCI will be implemented in Part 2):

[Figure: Topology overview]

Topology in more detail:

[Figure: Detailed topology]

Read More

Using Expect and TCL scripts to gather device configurations

From time to time I have to gather device configurations, either to store them remotely or to parse them with other scripts. Expect and TCL scripts are very handy here, as you can automate these configuration backups with just a short, simple script. TCL and Expect are quite powerful and can do much more than this; what follows is just a small example of how useful they really are.

First you have to install TCL and Expect. Below you can see how to install these on CentOS 6.6 using Yum, which is really straightforward.
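
Once TCL and Expect are installed, a configuration backup script can be as short as the sketch below. It logs in over SSH, disables paging (a Cisco-style prompt and commands are assumed here) and writes the running configuration to a file named after the host; the username, password and prompt patterns are placeholders that need adjusting for your own devices.

#!/usr/bin/expect -f
# Minimal config-backup sketch: ./get-config.exp <hostname>
# Username, password and prompt patterns below are placeholders.
set host [lindex $argv 0]
set user "backup"
set pass "backup-password"
set timeout 30

spawn ssh $user@$host
expect "assword:"
send "$pass\r"
expect "#"
send "terminal length 0\r"
expect "#"
# Everything from here on is written to <hostname>.cfg
log_file -noappend $host.cfg
send "show running-config\r"
expect "#"
log_file
send "exit\r"
expect eof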

Read More

Juniper QFX, IP-Fabric and VXLAN – Part 2

See the first part here: Juniper QFX, IP-Fabric and VXLAN – Part 1

 

At last, here is Part 2 of the “Juniper QFX, IP-Fabric and VXLAN” post. In this post I will show how to configure VXLAN and verify that it works by looking at multicast, VTEP and general switching outputs. VXLAN configuration is actually quite a breeze once the multicast and IP-Fabric configurations are in place. Remember that (currently, as of May 2015) the QFX series does not support VXLAN routing; you would need an MX or EX9200 for that.
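
As a preview of how simple the leaf side is, here is a minimal sketch of a multicast-based VXLAN mapping on a QFX5100. The VLAN ID, VNI, multicast group and interface are placeholder values for this example, not the ones used in the actual lab.

# Placeholder values: VLAN 100, VNI 10100, multicast group 239.1.1.100
set switch-options vtep-source-interface lo0.0
set vlans VLAN100 vlan-id 100
set vlans VLAN100 vxlan vni 10100
set vlans VLAN100 vxlan multicast-group 239.1.1.100
set interfaces xe-0/0/10 unit 0 family ethernet-switching interface-mode trunk
set interfaces xe-0/0/10 unit 0 family ethernet-switching vlan members VLAN100

Verification can then be done with outputs such as show ethernet-switching vxlan-tunnel-end-point remote plus the usual multicast and MAC table commands, which is what the rest of this post walks through.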

Spine switches do not require any special configuration, as from their point of view VXLAN is just routed L3 traffic. The spines simply forward packets according to the routing table and do not care whether it is VXLAN traffic or something else. With this configuration you also do not need anything special on the host; just match the VLAN ID specified on the trunk. See the topology below:

Read More

Juniper QFX, IP-Fabric and VXLAN – Part 1

See the second part here: Juniper QFX, IP-Fabric and VXLAN – Part 2

Recently I have been lab testing and evaluating some Juniper QFX switches and new DC LAN architectures. In this and upcoming posts I will share some configuration guides and hints regarding the Juniper QFX (QFX5100-24Q and QFX5100-48S), IP-Fabric (a complete L3 eBGP fabric) and VXLAN configuration. Of course the fabric could use iBGP, OSPF or IS-IS if you wanted; I just decided to go with eBGP because of some traffic engineering features. An L3 fabric poses some interesting questions and issues that we did not have to think about in the previous “old school” L2 networks:

  • Bare-metal server connectivity and L2 dual homing
  • Virtual-to-Virtual, Virtual-to-Physical, Physical-to-Physical
  • The L2 overlay that is still needed (and not only for vMotion)
  • Firewall and load balancer connectivity (that is, non-overlay, non-VXLAN devices)
  • DCI

As you probably know, VXLAN is used as an overlay that extends L2 over a routed L3 network using MAC-in-UDP encapsulation, which is useful for applications that require L2 connectivity. I’m not going to deep dive into how VXLAN works, but rather post some configuration snippets and guidelines with sample topologies. If you need more detailed VXLAN specifications, please check the VMware, Cisco, Cisco Live! and Juniper documentation; these are really good resources, and especially the Cisco Live! materials are worth checking out.

The test IP-Fabric design is based on a spine-leaf architecture with eBGP running in the core. There are two spine switches (QFX5100-24Q) and two leaf switches (QFX5100-48S), and every leaf is connected to every spine. The routing protocol is eBGP over point-to-point links, with each switch running its own AS number. The L2 overlay is built with VXLAN, and in this design I’m introducing directly connected servers and appliances to the VXLAN network. In Part 1 I will show the configuration of the IP-Fabric; we’ll dive into VXLAN in the next part. See the physical and logical topologies below.
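
To give a flavour of the fabric configuration before the detailed walkthrough, here is a minimal sketch of the eBGP underlay on one leaf. The interfaces, addresses and AS numbers are placeholders for this example: each switch runs its own AS, the uplinks are /31 point-to-point links, and multipath plus the load-balancing policy give ECMP across both spines.

# Placeholder underlay sketch: leaf in AS 65003, spines in AS 65001 and 65002
set interfaces et-0/0/48 unit 0 family inet address 10.0.1.1/31
set interfaces et-0/0/49 unit 0 family inet address 10.0.2.1/31
set routing-options autonomous-system 65003
set routing-options forwarding-table export LOAD-BALANCE
set policy-options policy-statement LOAD-BALANCE then load-balance per-packet
set policy-options policy-statement EXPORT-DIRECT term 1 from protocol direct
set policy-options policy-statement EXPORT-DIRECT term 1 then accept
set protocols bgp group FABRIC type external
set protocols bgp group FABRIC export EXPORT-DIRECT
set protocols bgp group FABRIC multipath multiple-as
set protocols bgp group FABRIC neighbor 10.0.1.0 peer-as 65001
set protocols bgp group FABRIC neighbor 10.0.2.0 peer-as 65002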

Read More

Junos and Python – Junos PyEZ – Part 1

Feel like automating some configuration, monitoring and troubleshooting tasks on Junos? Step in Python and the Junos PyEZ framework (https://techwiki.juniper.net/Projects/Junos_PyEZ). Junos PyEZ is a Python framework that offers a “quite easy” approach to performing automation and configuration tasks on Junos devices. It is also easy for non-programmers to understand, so you don’t need deep Python knowledge for basic tasks. At the protocol level it uses NETCONF over SSH to connect to the device.

The requirements are Python 2.7 and the Junos PyEZ framework. My test platform runs on CentOS 6.6, which ships with Python 2.6 by default, so it either needs upgrading or a separate Python 2.7 installation on the side. You can check the quick guide I wrote for installing Python 2.7 on CentOS 6.6 here: http://www.networkers.fi/blog/installing-python-2-7-on-centos-6-x/
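
To give an idea of how little code a basic task needs, here is a minimal sketch that opens a NETCONF-over-SSH session with PyEZ and prints a few device facts. The hostname and credentials are placeholders for this example.

from jnpr.junos import Device

# Placeholder host and credentials for this sketch
dev = Device(host='192.0.2.10', user='lab', passwd='lab123')
dev.open()                      # NETCONF over SSH, port 830 by default
print dev.facts['hostname']
print dev.facts['version']
print dev.facts['model']
dev.close()

On Python 2.7 this is all it takes to verify that the PyEZ installation and NETCONF connectivity to the device are working.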

Read More

DCI with PBB-EVPN and Cisco ASR9000

Lately I have been spending some time lab testing a new Cisco ACI environment (more about ACI in future posts). For a multi-DC service provider, DC interconnect is of utmost importance. I have implemented some Nexus 7000/5000 environments using vPC-based DC interconnect (dark fiber, CWDM), which have worked quite nicely. Especially when there are requirements such as MACsec (802.1AE), the N7K M-series linecards are quite nice with 10G line-rate MACsec.

However, there have been some cases where a direct L2 interconnect is not possible. Especially in the ACI case, if you want to stretch a single fabric to another DC you need 8 x 40G (!) in between, which is quite a lot. You could do the ACI interconnect (two separate ACI fabrics) from leaf to leaf with N x 10G, but L2 over L3 still brings some advantages. Therefore I have been looking into different technologies for DCI. Lately I have lab-tested PBB-EVPN (Provider Backbone Bridging – Ethernet VPN) DCI with Cisco ASR9000s, and I must say it works quite nicely. In the lab tests I used 1 x 10G towards the DC at both ends and 2 x 10G bundled between the ASRs to simulate the EVPN/MPLS core. As there is no “real” MPLS in between, this can simply be thought of as two directly connected PE routers. There are two extended VLANs in this example: 503 and 751.
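
For reference, the general shape of the ASR9000 side for one of the extended VLANs looks roughly like the sketch below. The interface, bridge-domain names, I-SID and EVI number are placeholders (loosely based on VLAN 503), and the BGP EVPN address-family and route-target details are left out for brevity.

! Rough PBB-EVPN sketch for extended VLAN 503 (placeholder names and numbers)
interface TenGigE0/0/0/0.503 l2transport
 encapsulation dot1q 503
 rewrite ingress tag pop 1 symmetric
!
l2vpn
 bridge group DCI
  bridge-domain VLAN503
   interface TenGigE0/0/0/0.503
   pbb edge i-sid 10503 core-bridge-domain PBB-CORE
  !
  bridge-domain PBB-CORE
   pbb core
    evpn evi 503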

Read More

The Beginning

I decided to start this blog to post some observations, notes, tips & tricks, config snippets and such regarding networking, virtualization and most likely some Unix stuff as well.

This will also work as a nice note keeping system for myself. Hopefully you’ll find something of use here. Please feel free to comment, ask questions and post your own observations.

As a little introduction, I currently work as a Network Architect for a large data center service provider in Finland. These days I work mostly with Cisco Nexus/ASR, Juniper SRX/EX/MX and Citrix NetScaler environments, with some hands-on VMware and Hyper-V work included as well…

Welcome and have a nice read!