ESXi on ASUS NUC 14 Performance (Scorpion Canyon)

In addition to the ASUS NUC 14 Pro (which I recently reviewed), ASUS has also released the ASUS NUC 14 Performance (codenamed Scorpion Canyon) as part of their Intel Core Ultra (Meteor Lake) lineup. Compared to the ASUS NUC 14 Pro and Pro+, the ASUS NUC 14 Performance offers […]


Broadcom Social Media Advocacy

Updated Dashboard for VMware Community Homelabs…

While working on some data analysis for an internal project, I was looking for a better way to summarize and provide some visualizations of the raw data for better consumption. I also wanted to automate this process, so that I could easily build reports or dashboards regardless of the […]


Broadcom Social Media Advocacy

Quick Tip – Retrieving vSAN File Share Network…

When creating a new vSAN File Share, which is powered by vSAN File Services, additional network access controls (no access, allow access from any IP or custom) can be configured. To view the configured network permissions, users must expand each file share to get the relevant information. For […]


Broadcom Social Media Advocacy

Backup and restore ESXi host configuration data…

In some cases, we need to reinstall an ESXi host. To avoid time-consuming server setup, we can quickly back up and restore the host configuration. There are three possible ways to achieve this: the ESXi command line, vSphere CLI, or PowerCLI. In this article I will show how to back up and restore host […]
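
As a rough sketch of the PowerCLI route mentioned in the excerpt (vCenter name, host name, path, and credentials below are placeholders), the Get-VMHostFirmware and Set-VMHostFirmware cmdlets handle the configuration bundle; note that the host must be in maintenance mode before a restore:

    # PowerCLI sketch; names, paths and credentials are placeholders
    Connect-VIServer -Server vcenter.lab.local

    # Back up the host configuration bundle to a local folder
    Get-VMHostFirmware -VMHost esx01.lab.local -BackupConfiguration -DestinationPath C:\Backups

    # Restore requires the host to be in maintenance mode first
    Get-VMHost esx01.lab.local | Set-VMHost -State Maintenance
    Set-VMHostFirmware -VMHost esx01.lab.local -Restore -SourcePath C:\Backups\configBundle.tgz -HostUser root -HostPassword 'VMware1!'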


Broadcom Social Media Advocacy

VMware vCenter Server 8.0 Update 3c: Fixing vSphere Client Idle Session Issue

VMware has released vCenter Server 8.0 Update 3c, bringing several key improvements and bug fixes. Among these, one notable issue addressed in this release relates to the vSphere Client’s behavior when left idle for extended periods.

PR 3439359: vSphere Client Session Becomes Unresponsive After 50 Minutes of Inactivity

Starting with vSphere 8.0 Update 3b, users encountered a frustrating issue with the vSphere Client. If a session remained idle for more than 50 minutes, the client would become unresponsive, making it impossible to log in or log out. Attempting to resume work in the same browser had no effect unless all browser cookies were cleared. This was not only an inconvenience but also a disruption for administrators managing their vSphere environments.

Cause of the Issue: Apache Tomcat 9.0.91 Upgrade

The root of the problem was traced back to an upgrade to Apache Tomcat 9.0.91, introduced in vSphere 8.0 Update 3b. This upgrade brought with it a change in the default value of the property org.apache.catalina.connector.RECYCLE_FACADES. Previously set to FALSE, this value was altered to TRUE, causing sessions to become invalid after extended inactivity. This meant that any session left idle for over 50 minutes would not properly refresh, effectively locking the user out until they manually cleared cookies from their browser.
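
For context only, the property in question is a standard Tomcat JVM system property; the line below shows how such a property is typically expressed in a generic Tomcat catalina.properties file. This is purely illustrative, not a supported change to the vCenter Server Appliance; the supported fix is updating to vCenter Server 8.0 Update 3c.

    # Illustrative only (generic Tomcat, not a supported vCenter Appliance modification):
    # the system property whose default behavior changed with the Tomcat 9.0.91 upgrade
    org.apache.catalina.connector.RECYCLE_FACADES=false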


Quick Tip – Using PowerCLI to query VMware…

One of the most powerful and versatile VM management capabilities in vSphere is the Guest Operations API, providing a rich set of operations, from transferring files to/from the guest to running commands directly on the guest as if you were logged in! An easy way to consume the Guest Operations API […]
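
As a hedged sketch of what this looks like with PowerCLI (the VM name and guest credentials are placeholders), the Invoke-VMScript and Copy-VMGuestFile cmdlets drive the Guest Operations API through VMware Tools:

    # VM name and guest credentials are placeholders
    $vm = Get-VM -Name 'app01'

    # Run a command inside the guest through VMware Tools (no guest network access needed)
    Invoke-VMScript -VM $vm -ScriptText 'hostname; uptime' -ScriptType Bash -GuestUser root -GuestPassword 'VMware1!'

    # Push a file from the local machine into the guest
    Copy-VMGuestFile -Source C:\scripts\report.sh -Destination /tmp/report.sh -VM $vm -LocalToGuest -GuestUser root -GuestPassword 'VMware1!'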


Broadcom Social Media Advocacy

Quick Tip – SSH Server, Client & Authorized Key Configurations for ESXi 7.0 Update 1 and later

The general best practice is to disable SSH on your ESXi host by default, and if/when you need access, turn it on temporarily and disable it again when you have completed your task. For users that need to modify the default SSH configurations, whether that is on the server side, client side, or setting […]
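
A small PowerCLI sketch of that on-demand workflow (the host name is a placeholder); the ESXi SSH daemon is the host service with key TSM-SSH:

    # Turn SSH on temporarily for a host (host name is a placeholder)
    Get-VMHost esx01.lab.local | Get-VMHostService |
        Where-Object { $_.Key -eq 'TSM-SSH' } | Start-VMHostService

    # ...and turn it back off once the task is done
    Get-VMHost esx01.lab.local | Get-VMHostService |
        Where-Object { $_.Key -eq 'TSM-SSH' } | Stop-VMHostService -Confirm:$false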


Broadcom Social Media Advocacy

Intel Skylake CPUs Reaching End of Support in Future vSphere Releases after 8.x

As the IT industry continues to evolve, so do the platforms and hardware that support our digital infrastructure. One significant upcoming change relates to Intel's Skylake generation of processors, which has entered the End of Servicing Update (ESU) and End of Servicing Lifetime (EOSL) phase: as of December 31, 2023, Intel no longer provides updates for Skylake server-class processors, including the Xeon Scalable Processor (SP) series. This change is set to impact future VMware vSphere releases, as VMware plans to discontinue support for Intel Skylake CPUs in its next major release following vSphere 8.x.

Why Skylake CPUs are Being Phased Out

Intel’s Skylake architecture, introduced in 2015, has been widely adopted in server environments for its balance of performance and power efficiency. The Xeon Scalable Processor series, which is part of the Skylake generation, has been foundational in many data centers around the world. However, as technology progresses, older generations of processors become less relevant in the context of modern workloads and new advancements in CPU architectures.

Impact on VMware vSphere Users

With VMware announcing plans to drop support for Skylake CPUs in a future major release after vSphere 8.x, organizations relying on these processors need to start planning for hardware refreshes. As VMware’s virtualization platform evolves, it is optimized for more modern CPU architectures that offer enhanced performance, security, and energy efficiency.
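
One simple way to start that assessment is to inventory the CPU model each host reports, for example with PowerCLI (the vCenter name is a placeholder):

    # List each host's reported CPU model to spot Skylake-era Xeon Scalable parts
    Connect-VIServer -Server vcenter.lab.local   # placeholder vCenter name
    Get-VMHost | Select-Object Name, ProcessorType, NumCpu |
        Sort-Object ProcessorType | Format-Table -AutoSize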

More info: CPU Support Deprecation and Discontinuation in vSphere Releases

Cisco UCS Manager Release 4.3(4a): New Optimized Adapter Policies for VIC Series Adapters

Starting with Cisco UCS Manager release 4.3(4a), Cisco has introduced optimized adapter policies for Windows, Linux, and VMware operating systems, including a new policy for VMware environments called “VMware-v2.” This update affects the Cisco UCS VIC 1400, 14000, and 15000 series adapters, promising improved performance and flexibility.

This release is particularly interesting for those managing VMware infrastructures, as many organizations—including ours—have been using similar settings for years. However, one notable difference is that the default configuration in the new policy sets Interrupts to 11, while in our environment, we’ve historically set it to 12.

Key Enhancements in UCS 4.3(4a)

  1. Optimized Adapter Policies: The new “VMware-v2” policy is tailored to enhance performance in VMware environments, specifically for the Cisco UCS VIC 1400, 14000, and 15000 adapters. It adjusts parameters such as the number of interrupts, queue depths, and receive/transmit buffers to achieve better traffic handling and lower latency.
  2. Receive Side Scaling (RSS): A significant feature available on the Cisco UCS VIC series is Receive Side Scaling (RSS). RSS is crucial for servers handling large volumes of network traffic as it allows the incoming network packets to be distributed across multiple CPU cores, enabling parallel processing. This distribution improves the overall throughput and reduces bottlenecks caused by traffic being handled by a single core. In high-performance environments like VMware, this can lead to a noticeable improvement in network performance. RSS is enabled on a per-vNIC basis, meaning administrators have granular control over which virtual network interfaces benefit from the feature. Given the nature of modern server workloads, enabling RSS on vNICs handling critical traffic can substantially improve performance, particularly in environments with multiple virtual machines.
  3. Maximizing Ring Size: Another important recommendation for administrators using the VIC 1400 adapters is to set the ring size to the maximum, which for these adapters is 4096. The ring size determines how much data can be queued for processing by the NIC (Network Interface Card) before being handled by the CPU. A larger ring size allows for better performance, especially when dealing with bursts of high traffic. In environments where high throughput and low latency are critical, setting the ring size to its maximum value ensures that traffic can be handled more efficiently, reducing the risk of packet drops or excessive buffering. A quick way to check the values a host actually reports is sketched after this list.
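
The authoritative place to set these values for VIC adapters is the UCS adapter policy, but as a hedged sketch (assuming ESXi 6.7 or later; the host and vmnic names are placeholders), the RX/TX ring sizes a host currently reports for an uplink can be checked from PowerCLI via Get-EsxCli:

    # Check the ring sizes ESXi reports for an uplink (host and vmnic names are placeholders)
    $esxcli = Get-EsxCli -VMHost (Get-VMHost esx01.lab.local) -V2
    $ringArgs = $esxcli.network.nic.ring.current.get.CreateArgs()
    $ringArgs.nicname = 'vmnic0'
    $esxcli.network.nic.ring.current.get.Invoke($ringArgs)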

Links: