IP Address Overlap in NSX – An NSX Blog

Introduction: One of my NSX peers was recently working on an IP address overlap issue that helped lead to a better understanding of routing behaviour within an NSX environment. The Scenario: In this corner case scenario there is IP address overlap between these two subnets: The NSX environment, [..]
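The overlap condition described in the excerpt can be checked directly with Python's standard `ipaddress` module. A minimal sketch, using hypothetical subnets since the excerpt does not name the actual overlapping networks:

```python
import ipaddress

# Hypothetical subnets for illustration; the article excerpt does not
# name the actual overlapping networks.
net_a = ipaddress.ip_network("172.16.10.0/24")
net_b = ipaddress.ip_network("172.16.10.128/25")

# overlaps() is True when the two networks share any addresses.
print(net_a.overlaps(net_b))  # True

# In routing, the longest (most specific) prefix wins, so traffic for
# an address in both networks would follow the /25 route, not the /24.
ip = ipaddress.ip_address("172.16.10.200")
print(ip in net_b)  # True
```

Longest-prefix-match is the behaviour worth keeping in mind when diagnosing overlap symptoms: the more specific route silently wins.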


VMware Social Media Advocacy

2023 NSX Ninja Program For Customers

The five-day VMware NSX Software-Defined Networking (SDN) Ninja program provides in-depth coverage of networking use cases. The program is a comprehensive look at the NSX architecture and the components that support switching, routing, VPN, load balancing, container networking, and multi-site networking [..]


VMware Social Media Advocacy

Multi-Tenancy Datacenter with NSX EVPN

The data center landscape has radically evolved over the last decade thanks to virtualization. Before Network Virtualization Overlay (NVO), data centers were limited to 4096 broadcast domains, which could be problematic for large data centers supporting a multi-tenancy architecture. Virtual […]
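The 4096 limit comes from the 12-bit VLAN ID field in the 802.1Q tag; overlay encapsulations such as VXLAN and GENEVE carry a 24-bit segment identifier (VNI) instead. The arithmetic:

```python
# 802.1Q VLAN ID is a 12-bit field; VXLAN/GENEVE carry a 24-bit VNI,
# which is what lifts the 4096-segment ceiling.
vlan_ids = 2 ** 12
vni_ids = 2 ** 24
print(vlan_ids)  # 4096
print(vni_ids)   # 16777216
```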


VMware Social Media Advocacy

CSCvz43359 Traffic using GENEVE overlay sometimes leaves wrong VNIC when GENEVE Offload is enabled on VIC14xx – FIX

According to the Release Notes for Cisco UCS Manager, Release 4.2(1l), there is a fix for CSCvz43359, "Traffic using GENEVE overlay sometimes leaves wrong VNIC when GENEVE Offload is enabled on VIC14xx":

The following caveat related to NSX-T is resolved in Release 4.2(1l):

  Defect ID:             CSCvz43359
  Symptom:               On a Cisco UCS server using an NSX-T topology, data traffic using a GENEVE overlay sometimes left the wrong vNIC when GENEVE Offload was enabled on a VIC 1400 series Fabric Interconnect. This issue is resolved.
  First Bundle Affected: 4.2(1d)C
  Resolved in Release:   4.2(1l)C

Traffic using GENEVE overlay sometimes leaves wrong VNIC when GENEVE Offload is enabled on VIC14xx

Symptom: Rapid MAC moves observed on the Fabric Interconnect and northbound switches, where the MAC address belongs to a device using the GENEVE overlay. pktcap-uw in the ESXi kernel was not able to observe this phenomenon; it was only observable via tcpdump on the physical VIC adapter in the debug shell.

Conditions: This was specifically seen in an NSX-T topology, though more general use of GENEVE offloading in the hardware would likely show the same behavior. The NSX-T TEP MAC addresses should be ‘bound’ to a physical interface unless there is a topology change. In this circumstance, we observed the TEP MACs rapidly moving from Fabric A to Fabric B and vice versa while the teaming/load-balancing policy was set to Active/Active in ESXi and NSX. NSX-T uses BFD control frames between hosts, and BFD leverages GENEVE. When GENEVE Offloading is enabled in the VIC adapter policy, a small number of these BFD frames egress the wrong physical link, which causes the unexpected MAC-move behavior on northbound devices.
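For context on what the adapter's offload engine has to handle, here is a minimal sketch of parsing the fixed 8-byte GENEVE header (RFC 8926). The O bit marks control/OAM-style frames; the VNI value and flag settings below are illustrative, not captured from NSX traffic:

```python
import struct

def parse_geneve_header(pkt: bytes):
    """Parse the fixed 8-byte GENEVE header (RFC 8926).

    A minimal sketch to show the fields involved; this is not NSX or
    Cisco VIC code.
    """
    ver_optlen, flags, proto = struct.unpack("!BBH", pkt[:4])
    vni = int.from_bytes(pkt[4:7], "big")  # 24-bit Virtual Network Identifier
    return {
        "version": ver_optlen >> 6,
        "opt_len_words": ver_optlen & 0x3F,  # option length in 4-byte units
        "oam": bool(flags & 0x80),           # O bit: control/OAM frame
        "critical": bool(flags & 0x40),      # C bit: critical options present
        "protocol": hex(proto),              # 0x6558 = Transparent Ethernet bridging
        "vni": vni,
    }

# Illustrative header: version 0, O bit set, Ethernet payload, VNI 5001
hdr = (bytes([0x00, 0x80])
       + (0x6558).to_bytes(2, "big")
       + (5001).to_bytes(3, "big")
       + b"\x00")
print(parse_geneve_header(hdr))
```

Frames like these are what the VIC 1400's GENEVE offload inspects; per the caveat above, a fraction of them were steered out the wrong physical link until the 4.2(1l) fix.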

Links:

ESXi host fails with PSOD “#PF Exception 14 in world xxxx:nsx-cfgagent” during bulk vMotions in a NSX-T Environment (87352)

Be aware of the issues below, found in NSX-T 3.1.3 and 3.1.3.x. If you are considering moving to NSX-T 3.1.3.x, please upgrade directly to 3.1.3.6.1.

This issue is observed when bulk vMotions occur in the NSX-T environment. The following are some of the probable scenarios:

  • Migration of multiple VMs, with each VM comprising multiple vNICs
  • Multiple IP sets configured in CIDR form per rule
  • Multiple rules containing the same IP sets
  • VMs from a non-upgraded NSX-T host migrated to an upgraded NSX-T host
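To see why CIDR-form IP sets shared across rules amplify the work per migrated VM, here is a hedged sketch with a hypothetical rule set (the rule names and CIDRs are made up): every vNIC address must be evaluated against every CIDR of every rule, so bulk vMotions multiply this per-VM cost.

```python
import ipaddress

# Hypothetical rules illustrating "multiple IP sets in CIDR form per
# rule" and "multiple rules containing the same IP sets".
rules = {
    "rule-100": ["10.0.0.0/16", "192.168.1.0/24"],
    "rule-200": ["10.0.0.0/16", "172.16.0.0/12"],  # reuses the same IP set
}

def matching_rules(ip: str):
    """Return the names of all rules whose CIDR sets contain the address."""
    addr = ipaddress.ip_address(ip)
    return [name for name, cidrs in rules.items()
            if any(addr in ipaddress.ip_network(c) for c in cidrs)]

print(matching_rules("10.0.5.7"))  # ['rule-100', 'rule-200']
```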
The scenarios above may lead to a PSOD with the backtrace captured in the screenshot PSOD_cfagent.PNG.

Workaround

  • Set DRS to manual on the ESXi Cluster and avoid performing bulk vMotions

Fix

Fixed in NSX-T 3.1.3.6, but it is recommended to upgrade directly to 3.1.3.6.1 due to the issue below.

NSX-T 3.1.3.6 Edge configured with an L4 LB stops passing all traffic (87627)

This issue is fixed in NSX-T 3.1.3.6.1. Also, NSX-T 3.2 is not impacted by this.

Links

NSX-T Edge design guide for Cisco UCS

How do you design NSX-T Edge inside Cisco UCS? I can't find it in the Cisco Design Guide, but I found useful topologies in Dell EMC VxBlock™ Systems, the VMware® NSX-T Reference Design, and NSX-T 3.0 Edge Design Step-by-Step UI workflow. Thanks, Dell and VMware …

VMware® NSX-T Reference Design

  • VDS Design update – New capability of deploying NSX on top of VDS with NSX
  • VSAN Baseline Recommendation for Management and Edge Components
  • VRF Based Routing and other enhancements
  • Updated security functionality
  • Design changes that go with VDS with NSX
  • Performance updates

NSX-T 3.0 Edge Design Step-by-Step UI workflow

This is an informal document that walks through the step-by-step deployment and configuration workflow for the NSX-T Edge Single N-VDS Multi-TEP design. It uses the NSX-T 3.0 UI to show the workflow, which is broken down into the following three sub-workflows:

  1. Deploy and configure the Edge node (VM and bare metal) with Single N-VDS Multi-TEP.
  2. Prepare NSX-T for Layer 2 External (North-South) connectivity.
  3. Prepare NSX-T for Layer 3 External (North-South) connectivity.

NSX-T Design with EDGE VM

  • Under Teamings, add two teaming policies: one with Active Uplink “uplink-1” and the other with “uplink-2”.
  • Make a note of the policy names used, as we will use them in the next section. In this example they are “PIN-TO-TOR-LEFT” and “PIN-TO-TOR-RIGHT”.
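As a rough sketch, the two named teaming policies would appear in the uplink profile payload along these lines. The field names follow the NSX-T uplink profile API schema as I recall it; verify them against the API reference for your NSX-T version before use:

```json
{
  "teamings": [
    {
      "name": "PIN-TO-TOR-LEFT",
      "policy": "FAILOVER_ORDER",
      "active_list": [
        { "uplink_name": "uplink-1", "uplink_type": "PNIC" }
      ]
    },
    {
      "name": "PIN-TO-TOR-RIGHT",
      "policy": "FAILOVER_ORDER",
      "active_list": [
        { "uplink_name": "uplink-2", "uplink_type": "PNIC" }
      ]
    }
  ]
}
```

A failover-order policy with a single active uplink is what pins each named teaming to one ToR, which is the point of the LEFT/RIGHT pair.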

How to design NSX-T Edge inside Cisco UCS?

The Cisco Fabric Interconnect uses port channels, and you need high bandwidth for the NSX-T Edge load.

A C220 M5 could solve it.

The edge node physical NIC definition includes the following

  • VMNIC0 and VMNIC1: Cisco VIC 1457
  • VMNIC2 and VMNIC3: Intel XXV710 adapter 1 (TEP and Overlay)
  • VMNIC4 and VMNIC5: Intel XXV710 adapter 2 (N/S BGP Peering)

[Figure: NSX-T transport nodes with Cisco UCS C220 M5]
[Figure: Logical topology of the physical edge host]

Or for PoC or Lab – Uplink Eth Interfaces

For a PoC or home lab, we could use Uplink Eth Interfaces and create a vNIC template linked to these uplinks.

Links: