Daniel Micanek virtual Blog – Like normal Dan, but virtual.
Category: NSX
The “VMware NSX” blog category focuses on VMware NSX, an advanced network virtualization and security platform. It offers extensive information and expertise, including detailed guides, tutorials, technical analyses, and the latest updates.
This blog will guide you through the complete installation of NAPP (NSX Application Platform) without using Tanzu or NSX Advanced Load Balancer. If you already have Tanzu and NSX Advanced Load Balancer installed, I highly recommend using your existing tools! by Daniel Stich.
Introduction: One of my NSX peers was recently working on an IP address overlap issue that led to a better understanding of routing behaviour within an NSX environment. The Scenario: In this corner-case scenario there is IP address overlap between these two subnets: the NSX environment, [..]
Today we are going to talk about logical routing in the VMware NSX-T environment. As many of you know, the logical routing capability in the NSX-T platform provides the ability to interconnect both virtual and physical workloads deployed in different logical L2 networks. […]
The 5-day VMware NSX Software-Defined Networking (SDN) Ninja program provides in-depth coverage of networking use cases. This program is a comprehensive look at the NSX architecture and components that support switching, routing, VPN, load balancing, container networking, and multi-site networking [..]
The data center landscape has radically evolved over the last decade thanks to virtualization. Before Network Virtualization Overlay (NVO), data centers were limited to 4096 broadcast domains, which could be problematic for large data centers supporting a multi-tenancy architecture. Virtual […]
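The 4096-domain limit comes from the 12-bit VLAN ID field in the 802.1Q header, while overlay encapsulations such as VXLAN and GENEVE carry a 24-bit Virtual Network Identifier (VNI). A quick sketch of the difference in scale:

```python
# 802.1Q VLAN ID is 12 bits; a VXLAN/GENEVE VNI is 24 bits.
vlan_domains = 2 ** 12   # classic VLAN broadcast domains
vni_domains = 2 ** 24    # overlay segments per transport network

print(vlan_domains)  # 4096
print(vni_domains)   # 16777216
```

This is why NVO-based fabrics can carve out millions of isolated segments where a VLAN-only design tops out at 4096.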
According to the Release Notes for Cisco UCS Manager, Release 4.2(1l), there is a fix for CSCvz43359 (“Traffic using GENEVE overlay sometimes leaves wrong vNIC when GENEVE Offload is enabled on VIC 14xx”):
The following caveat related to NSX-T is resolved in Release 4.2(1l):

Defect ID: CSCvz43359
Symptom: On a Cisco UCS server using an NSX-T topology, data traffic using a GENEVE overlay sometimes left the wrong vNIC when GENEVE Offload was enabled on a VIC 1400 series Fabric Interconnect. This issue is resolved.
First Bundle Affected: 4.2(1d)C
Resolved in Release: 4.2(1l)C
Traffic using GENEVE overlay sometimes leaves wrong vNIC when GENEVE Offload is enabled on VIC 14xx
Symptom: Rapid MAC moves observed on the Fabric Interconnect and northbound switches, where the MAC address belongs to a device using the GENEVE overlay. pktcap-uw in the ESXi kernel was not able to observe this phenomenon; it was only observable via tcpdump on the physical VIC adapter in the debug shell.
Conditions: This was specifically seen in an NSX-T topology, though more general use of GENEVE offloading in the hardware would likely show the same behavior. The NSX-T TEP MAC addresses should be ‘bound’ to a physical interface unless there is a topology change. In this circumstance, we observed the TEP MACs rapidly moving from Fabric A to Fabric B and vice versa while the teaming/load-balancing policy was set to Active/Active in ESXi and NSX. NSX-T uses BFD control frames between hosts, and BFD leverages GENEVE. When GENEVE Offloading is enabled in the VIC adapter policy, a small number of these BFD frames egress the wrong physical link, which causes the unexpected MAC-move behavior on northbound devices.
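GENEVE (RFC 8926) runs over UDP port 6081, and an offloading NIC has to parse its 8-byte base header to steer inner flows to the right queue and uplink. As an illustration of what that header looks like on the wire (this is a teaching sketch in Python, not VIC firmware logic), here is a minimal decoder for the base header fields, including the 24-bit VNI and the O bit that marks control/OAM packets:

```python
import struct

def parse_geneve_base_header(data: bytes) -> dict:
    """Decode the 8-byte GENEVE base header (RFC 8926).

    Layout: Ver(2) | Opt Len(6) | O(1) | C(1) | Rsvd(6) |
            Protocol Type(16) | VNI(24) | Reserved(8)
    """
    first, second, proto = struct.unpack("!BBH", data[:4])
    vni = int.from_bytes(data[4:7], "big")  # 24-bit VNI
    return {
        "version": first >> 6,
        "opt_len": first & 0x3F,          # option length, in 4-byte multiples
        "oam": bool(second & 0x80),       # O bit: control packet (e.g. OAM)
        "critical": bool(second & 0x40),  # C bit: critical options present
        "protocol": proto,                # 0x6558 = Transparent Ethernet Bridging
        "vni": vni,
    }

# Hypothetical sample: version 0, no options, O bit set,
# Ethernet payload, VNI 0x0012D6.
hdr = bytes([0x00, 0x80, 0x65, 0x58, 0x00, 0x12, 0xD6, 0x00])
print(parse_geneve_base_header(hdr))
```

The small control frames described above (BFD carried inside GENEVE) look like any other GENEVE packet at this layer, which is why an offload-path bug can silently send a few of them out the wrong uplink.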
This document is an informal document that walks through the step-by-step deployment and configuration workflow for the NSX-T Edge Single N-VDS Multi-TEP design. This document uses the NSX-T 3.0 UI to show the workflow, which is broken down into the following three sub-workflows:
Deploy and configure the Edge node (VM & BM) with Single N-VDS Multi-TEP.
Preparing NSX-T for Layer 2 External (North-South) connectivity.
Preparing NSX-T for Layer 3 External (North-South) connectivity.
NSX-T Design with Edge VM
Under Teamings – add two teaming policies: one with the Active Uplink set to “uplink-1” and the other with “uplink-2”.
Make a note of the policy names used, as we will be using them in the next section. In this example they are “PIN-TO-TOR-LEFT” and “PIN-TO-TOR-RIGHT”.
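In the NSX-T API these per-uplink policies appear as named teamings on the uplink host switch profile. The sketch below builds the named-teamings portion of such a payload in Python; the field names follow the NSX-T `UplinkHostSwitchProfile` schema as I understand it, but verify them against the API reference for your NSX version before use:

```python
import json

def pin_to_tor_teaming(name: str, uplink: str) -> dict:
    """Build a failover-order teaming policy that pins traffic to one uplink.

    Illustrative sketch of an NSX-T named teaming entry; check the field
    names against your NSX version's API reference.
    """
    return {
        "name": name,
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": uplink, "uplink_type": "PNIC"}],
    }

# The two policies from the example above, each pinned to one ToR-facing uplink.
named_teamings = [
    pin_to_tor_teaming("PIN-TO-TOR-LEFT", "uplink-1"),
    pin_to_tor_teaming("PIN-TO-TOR-RIGHT", "uplink-2"),
]
print(json.dumps(named_teamings, indent=2))
```

Pinning each policy to a single active uplink is what gives the deterministic left/right ToR path that the Multi-TEP design relies on.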
How do you design an NSX-T Edge inside Cisco UCS?
Cisco Fabric Interconnects use Port Channels, and you need high bandwidth for the NSX-T Edge load. A C220 M5 could solve this.
The edge node physical NIC definition includes the following:
VMNIC0 and VMNIC1: Cisco VIC 1457
VMNIC2 and VMNIC3: Intel XXV710 adapter 1 (TEP and Overlay)
VMNIC4 and VMNIC5: Intel XXV710 adapter 2 (N/S BGP Peering)
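The NIC layout above can be captured as data and sanity-checked. This is a hypothetical helper, not an NSX or UCS tool; it simply verifies that each vmnic is assigned exactly once, the kind of duplicate assignment that is easy to introduce in a design document:

```python
# Hypothetical sanity check for the edge node NIC layout above.
nic_layout = [
    ("vmnic0", "Cisco VIC 1457"),
    ("vmnic1", "Cisco VIC 1457"),
    ("vmnic2", "Intel XXV710 adapter 1 (TEP and Overlay)"),
    ("vmnic3", "Intel XXV710 adapter 1 (TEP and Overlay)"),
    ("vmnic4", "Intel XXV710 adapter 2 (N/S BGP Peering)"),
    ("vmnic5", "Intel XXV710 adapter 2 (N/S BGP Peering)"),
]

names = [name for name, _ in nic_layout]
duplicates = {n for n in names if names.count(n) > 1}
assert not duplicates, f"vmnic assigned more than once: {duplicates}"
print("NIC layout OK:", len(names), "vmnics")
```

Keeping TEP/overlay and N/S BGP peering on separate physical adapters, as in this layout, avoids contending for the same uplinks.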