
Cisco Nexus 9300 – VXLAN with BGP EVPN Control Plane – Part 1

For the last few weeks I have been configuring, testing and putting the new Cisco Nexus 9300 (Nexus 9000) platform with VXLAN and a BGP EVPN control plane into production. It proved somewhat challenging because documentation and user experiences are still sparse. In particular, various posts, configuration guides and official documentation seem to recommend doing things differently, with no clear explanation of why one approach was chosen over another. So I decided to write this post to clear things up, and as always, if you have questions or agree/disagree with something, please comment below. Also note that this post is more a configuration guide than a VXLAN (or BGP EVPN) introduction; Google and the Cisco documentation can help with that. Part 2 will introduce DCI (Data Center Interconnect) and how to implement it with VXLAN and BGP EVPN.

Two important notes before we begin:

  • If you use BGP as the ingress-replication protocol, you do not need any multicast configuration (see the sketch right after this list)!
  • Also note that the configuration below uses eBGP (iBGP configuration is quite different)! A sketch of the spine-side eBGP peering follows the topology diagrams.
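To make the first note concrete, here is a minimal leaf-side sketch of BGP-based ingress replication on NX-OS. This is not the full lab configuration; the VLAN (100), VNI (10100) and loopback numbers are placeholders I picked for illustration:

    ! Hypothetical leaf snippet - VLAN 100, VNI 10100 and loopback0 are placeholders
    nv overlay evpn
    feature bgp
    feature nv overlay
    feature vn-segment-vlan-based

    vlan 100
      vn-segment 10100

    interface nve1
      no shutdown
      source-interface loopback0
      host-reachability protocol bgp
      member vni 10100
        ! BGP distributes the flood list, so no mcast-group is required
        ingress-replication protocol bgp

With multicast-based flooding you would configure an mcast-group under the member VNI instead, plus PIM in the underlay; ingress-replication protocol bgp removes both.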

The infrastructure is built with the following hardware and software:

  • Spines: Cisco Nexus 9332PQ
  • Leafs: Cisco Nexus 9372PX
  • All switches run NX-OS 7.0(3)I1(3) (the latest release as of 3.9.2015)

Topology overview (DCI will be implemented in Part 2):

[Figure: Topology overview]

Topology in more detail:

[Figure: Detailed topology]
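As mentioned in the notes above, the overlay peering runs over eBGP. One spine-side detail worth showing is that a spine is not a VTEP, so it must retain EVPN routes whose route-targets it does not import and pass them on with the next hop unchanged. A minimal sketch of that peering; the AS numbers (65000/65001) and neighbor address are made up for illustration:

    ! Hypothetical spine snippet - AS numbers and addresses are placeholders
    route-map NH-UNCHANGED permit 10
      set ip next-hop unchanged

    router bgp 65000
      address-family l2vpn evpn
        retain route-target all
      neighbor 10.0.0.1 remote-as 65001
        update-source loopback0
        ebgp-multihop 2
        address-family l2vpn evpn
          send-community extended
          route-map NH-UNCHANGED out

The leaf side mirrors this: an eBGP l2vpn evpn session towards each spine with send-community extended, but without the route-map or the route-target retention.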


DCI with PBB-EVPN and Cisco ASR9000

Lately I have been spending some time lab-testing a new Cisco ACI environment (more about ACI in future posts). For a multi-DC service provider, the DC interconnect is of utmost importance. I have implemented some Nexus 7000/5000 environments using vPC DC interconnect (dark fiber, CWDM), and they have worked quite nicely. Especially when there are requirements such as MACsec (802.1AE), the N7K M-series linecards are quite nice with 10G line-rate MACsec.

However, there have been cases where a direct L2 interconnect is not possible. In this ACI case especially, if you want to stretch a single fabric to the other DC you need 8 x 40G (!) in between, which is quite a lot. You could do the ACI interconnect (two separate ACI fabrics) from leaf to leaf with N x 10G, but L2 over L3 still brings some advantages. Therefore I have been looking into different technologies for DCI. Lately I have lab-tested a PBB-EVPN (Provider Backbone Bridging – Ethernet VPN) DCI with Cisco ASR9000s, and I must say it works quite nicely. In the lab tests I used 1 x 10G towards the DC at each end and 2 x 10G bundled between the ASRs, simulating the EVPN/MPLS core in between. As there is no "real" MPLS in between, this can simply be thought of as two directly connected PE routers. There are two extended VLANs in this example: 503 and 751.
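To give an idea of what this looks like on the ASR9000, here is a stripped-down IOS XR sketch for one PE and VLAN 503. The interface, bridge-domain names, I-SID (10503) and EVI (100) are illustrative placeholders, not the exact lab values:

    ! Hypothetical PE snippet - names, I-SID and EVI are placeholders
    interface TenGigE0/0/0/0.503 l2transport
     encapsulation dot1q 503

    l2vpn
     bridge group DCI
      bridge-domain EDGE-503
       interface TenGigE0/0/0/0.503
       ! Map the customer-facing bridge-domain into a PBB I-SID
       pbb edge i-sid 10503 core-bridge CORE
      !
      bridge-domain CORE
       pbb core
        ! B-MAC reachability is advertised over BGP EVPN in this EVI
        evpn evi 100

VLAN 751 gets its own edge bridge-domain and I-SID in the same fashion, and the PEs exchange the routes over a BGP session carrying the l2vpn evpn address family.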
